Otter-Grader Documentation
UCBDS Infrastructure Team, Sep 15, 2020
CONTENTS

1. Tutorial
2. Test Files
3. Creating Assignments
4. Student Usage
5. Otter Configuration Files
6. Grading Locally
7. Grading on Gradescope
8. Grading on a Deployable Service
9. Submission Execution
- PDF Generation and Filtering
- Intercell Seeding
- Logging
- Resources
- Changelog
- Installation
- Index
CHAPTER ONE

TUTORIAL

This tutorial can help you verify that you have installed Otter correctly and introduces you to the general Otter workflow. Once you have installed Otter, download this zip file and unzip it into some directory on your machine; you should have the following directory structure:

```
tutorial
├── assign
│   └── assign-demo.ipynb
├── generate
│   ├── requirements.txt
│   └── tests
│       ├── q1.py
│       ├── q2.py
│       └── q3.py
└── grade
    ├── meta.json
    ├── ipynbs
    │   ├── demo-fails1.ipynb
    │   ├── demo-fails2.ipynb
    │   ├── demo-fails2hidden.ipynb
    │   ├── demo-fails3.ipynb
    │   ├── demo-fails3hidden.ipynb
    │   └── demo-passesall.ipynb
    ├── tests
    │   ├── q1.py
    │   ├── q2.py
    │   └── q3.py
    └── zips
        ├── demo-fails1.zip
        ├── demo-fails2.zip
        ├── demo-fails2hidden.zip
        ├── demo-fails3.zip
        ├── demo-fails3hidden.zip
        └── demo-passesall.zip
```
1.1 Quickstart Tutorial

This section describes the basic execution of Otter's tools using the provided zip file. It is meant to verify your installation and to loosely describe how Otter's tools are used. This section covers Otter Assign, Otter Generate, and Otter Grade.

1.1.1 Otter Assign

Start by cd-ing into tutorial/assign. This directory includes the master notebook assign-demo.ipynb. Look over this notebook to get an idea of its structure. It contains five questions, four code and one Markdown (two of which are manually graded). Also note that the assignment configuration in the first cell tells Otter Assign to generate a solutions PDF and a Gradescope autograder zip file and to include special submission instructions before the export cell. To run Otter Assign on this notebook, run:

```
$ otter assign assign-demo.ipynb dist
Generating views...
Generating solutions PDF...
Generating autograder zipfile...
Running tests...
All tests passed!
```

Otter Assign should create a dist subdirectory which contains two further subdirectories: autograder and student. The autograder directory contains the Gradescope autograder, solutions PDF, and the notebook with solutions. The student directory contains just the sanitized student notebook. Both contain a tests subdirectory that contains tests, but only autograder/tests has the hidden tests.

```
assign
├── assign-demo.ipynb
└── dist
    ├── autograder
    │   ├── assign-demo.ipynb
    │   ├── assign-demo-sol.pdf
    │   ├── autograder.zip
    │   └── tests
    │       ├── q1.py
    │       ├── q2.py
    │       └── q3.py
    └── student
        ├── assign-demo.ipynb
        └── tests
            ├── q1.py
            ├── q2.py
            └── q3.py
```

For more information about the configurations for Otter Assign and its output format, see Distributing Assignments.
1.1.2 Otter Generate

Start by cd-ing into tutorial/generate. We have provided premade tests and a requirements file. Running Otter Generate is very simple if few configurations are needed: just run otter generate autograder and it will automatically find the tests in ./tests and the requirements in ./requirements.txt.

```
otter generate autograder
```

Otter Generate has quite a few options and configurations. For more information, see Grading on Gradescope.

1.1.3 Otter Grade

Start by cd-ing into tutorial/grade. The first thing to note is that we have provided a metadata file, meta.json, that maps student identifiers to filenames. Metadata files are optional when using Otter, but we have provided one here to demonstrate their use. This metadata file lists only the files in the ipynbs subdirectory, so we won't use it when grading zips.

```json
[
    {
        "identifier": "passesall",
        "filename": "demo-passesall.ipynb"
    },
    {
        "identifier": "fails1",
        "filename": "demo-fails1.ipynb"
    },
    {
        "identifier": "fails2",
        "filename": "demo-fails2.ipynb"
    },
    {
        "identifier": "fails2hidden",
        "filename": "demo-fails2hidden.ipynb"
    },
    {
        "identifier": "fails3",
        "filename": "demo-fails3.ipynb"
    },
    {
        "identifier": "fails3hidden",
        "filename": "demo-fails3hidden.ipynb"
    }
]
```

The filename and identifier of each notebook indicate which tests should be failing; for example, demo-fails2.ipynb fails all cases for q2 and demo-fails2hidden.ipynb fails the hidden test cases for q2. Let's now construct a call to Otter that will grade these notebooks. We know that we have JSON-formatted metadata, so we'll use the -j metadata flag. Our notebooks are in the ipynbs subdirectory, so we'll need the -p flag. The notebooks also contain a couple of written questions, and the filtering is implemented using HTML comments, so we'll specify the --html-filter flag. Let's run Otter on the notebooks:

```
otter grade -p ipynbs -j meta.json --html-filter -v
```
(I've added the -v flag so that we get verbose output.) After this finishes running, there should be a new file and a new folder in the working directory: final_grades.csv and submission_pdfs. The former should contain the grades for each file, and should look something like this:

```
identifier,manual,q1,q2,q3,total,possible
fails2,submission_pdfs/demo-fails2.pdf,1.0,0.0,1.0,2.0,3
fails3,submission_pdfs/demo-fails3.pdf,1.0,1.0,0.0,2.0,3
fails2hidden,submission_pdfs/demo-fails2hidden.pdf,1.0,0.0,1.0,2.0,3
fails1,submission_pdfs/demo-fails1.pdf,0.0,1.0,1.0,2.0,3
fails3hidden,submission_pdfs/demo-fails3hidden.pdf,1.0,1.0,0.0,2.0,3
passesall,submission_pdfs/demo-passesall.pdf,1.0,1.0,1.0,3.0,3
```

The latter, the submission_pdfs directory, should contain the filtered PDFs of each notebook (which should be relatively similar). Otter Grade can also grade the zip file exports provided by the Notebook.export method. To do this, just add the -z flag to your call to indicate that you're grading these zip files. We have provided some, with the same notebooks as above, in the zips directory, so let's grade those:

```
otter grade -p zips -vz
```

This should produce the same CSV output as above but no submission_pdfs directory, since we didn't tell Otter to generate PDFs.

1.2 Using Otter

Now that you have verified that your installation is working, let's learn more about Otter's general workflow. Otter has two major use cases: grading on the instructor's machine (local grading), and generating files to use Gradescope's autograding infrastructure.

1.2.1 Local Grading

Get started by creating some test cases and creating a requirements.txt file (if necessary).

Collecting Student Submissions

The first major decision is how you'll collect student submissions. You can collect these however you want, although Otter has built-in compatibility with Gradescope and Canvas. If you choose either Gradescope or Canvas, just export the submissions and unzip them into some directory.
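As an aside, the final_grades.csv file shown earlier is plain CSV, so it can be post-processed however you like. This sketch (not part of Otter) summarizes each submission's score as a fraction using Python's standard csv module:

```python
import csv
import io

# Two rows copied from the final_grades.csv example above
csv_text = """identifier,manual,q1,q2,q3,total,possible
fails2,submission_pdfs/demo-fails2.pdf,1.0,0.0,1.0,2.0,3
passesall,submission_pdfs/demo-passesall.pdf,1.0,1.0,1.0,3.0,3
"""

# Map each identifier to total / possible
scores = {
    row["identifier"]: float(row["total"]) / float(row["possible"])
    for row in csv.DictReader(io.StringIO(csv_text))
}
```

In a real pipeline you would open the file on disk rather than an in-memory string.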
If you are collecting another way, you may want to create a metadata file. You can use either JSON or YAML format, and the structure is pretty simple: each element needs to have a filename and a student identifier. Metadata files are optional, and if one is not provided, grades will be keyed on the submission filename. A sample YAML metadata file would be:

```yaml
- identifier: 0
  filename: test-nb-0.ipynb
- identifier: 1
  filename: test-nb-1.ipynb
- identifier: 2
  filename: test-nb-2.ipynb
- identifier: 3
  filename: test-nb-3.ipynb
- identifier: 4
  filename: test-nb-4.ipynb
- identifier: 5
  filename: test-nb-5.ipynb
```

Support Files

If you have any files that are needed by the notebooks (e.g. data files), put them into the directory that contains the notebooks. You should also have your directory of tests nearby. At this stage, your directory should look something like this:

```
grading
├── requirements.txt
├── submissions
│   ├── meta.yml
│   ├── nb0.ipynb
│   ├── nb1.ipynb
│   ├── nb2.ipynb
│   ├── nb3.ipynb
│   ├── nb4.ipynb
│   ├── nb5.ipynb
│   └── data.csv
└── tests
    ├── q1.py
    ├── q2.py
    ├── q3.py
    └── q4.py
```

Grading

Now that you've set up the directory, let's get down to grading. Go to the terminal, cd into your grading directory (grading in the example above), and let's build the otter grade command. The first thing we need is our notebook path, the -p argument. Otter assumes this is ./, but our notebooks are located in ./submissions, so we'll need -p submissions in our command. We also need a tests directory, which Otter assumes is at ./tests; because this is where our tests are, we're alright on this front and don't need a -t argument.

Now we need to tell Otter how we've structured our metadata. If you're using a Gradescope or Canvas export, just pass the -g or -c flag, respectively, with no arguments. If you're using a custom metadata file, as in our example, pass the -j or -y flag with the path to the metadata file as its argument; in our case, we will pass -y submissions/meta.yml.

At this point, we need to make a decision: do we want PDFs? If there are questions that need to be manually graded, it might be nice to generate a PDF of each submission so that it can be easily read; if you want to generate PDFs, pass one of the --pdf, --tag-filter, or --html-filter flags (cf. PDFs). For our example, let's say that we do want PDFs. Now that we've made all of these decisions, let's put our command together.
Our command is:

```
otter grade -p submissions -y submissions/meta.yml --pdf -v
```

Note that Otter automatically found our requirements file at ./requirements.txt. If it had been in a different location, we would have needed to pass its path to the -r flag. Note also that we pass the -v flag so that it prints
verbose output. Once this command finishes running, you will end up with a new file and a new folder in your working directory:

```
grading
├── final_grades.csv
├── requirements.txt
├── submission_pdfs
│   ├── nb0.pdf
│   ├── nb1.pdf
│   ├── nb2.pdf
│   ├── nb3.pdf
│   ├── nb4.pdf
│   └── nb5.pdf
├── submissions
│   └── ...
└── tests
    └── ...
```

Otter created the final_grades.csv file with the grades for each student, broken down by test, and the submission_pdfs directory to house the PDF generated from each notebook. Congrats, you're done! You can use the grades in the CSV file and the PDFs to complete grading however you want. You can find more information about otter grade here.

1.2.2 Gradescope

To get started using Otter with Gradescope, create some test cases and a requirements.txt file (if necessary). Once you have these pieces in place, put them into a directory along with any additional files that your notebook requires (e.g. data files), for example:

```
gradescope
├── data.csv
├── requirements.txt
├── utils.py
└── tests
    ├── q1.py
    ├── q2.py
    └── ...
```

To create the zipfile for Gradescope, use the otter generate command after cd-ing into the directory you created. For the directory above, once I've cd-ed into gradescope, I would run the following to generate the zipfile:

```
otter generate data.csv utils.py
```

As above, Otter automatically found our requirements file at ./requirements.txt. Notice also that we didn't indicate the path to the tests directory; this is because the default argument of the -t flag is ./tests, so Otter found them automatically. After this command finishes running, you should have a file called autograder.zip in the current working directory:

```
gradescope
├── autograder.zip
├── data.csv
├── requirements.txt
├── utils.py
└── tests
    ├── q1.py
    ├── q2.py
    └── ...
```

To use this zipfile, create a Programming Assignment on Gradescope and upload it on the Configure Autograder page of the assignment. Gradescope will then build a Docker image on which it will grade each student's submission. You can find more information about Gradescope usage here.
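If you script your grading pipeline, the metadata file described earlier can be generated rather than written by hand. The following sketch is not part of Otter; the directory layout and the identifier scheme (filename stems) are assumptions you would replace with your own:

```python
import json
from pathlib import Path

def build_metadata(submissions_dir):
    """Build otter grade-style metadata entries, one per notebook.

    Uses the filename stem as a stand-in student identifier, which is an
    assumption for illustration; substitute your real identifiers.
    """
    return [
        {"identifier": nb.stem, "filename": nb.name}
        for nb in sorted(Path(submissions_dir).glob("*.ipynb"))
    ]

# The result can be written to a meta.json and passed to otter grade via -j:
# Path("submissions/meta.json").write_text(
#     json.dumps(build_metadata("submissions"), indent=2))
```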
CHAPTER TWO

TEST FILES

2.1 Python: OK Format

Otter has different test file formats depending on which language you are grading. Python test files follow the OK format, a legacy of the OkPy autograder from which Otter inherits. R test files are more like unit tests and rely on the testthat library to run tests with expectations.

Otter requires OK-formatted tests to check students' work against. These have a very specific format, described in detail in the OkPy documentation. There is also a resource we developed on writing autograder tests that can be found here; this guide details things like the doctest format, the pitfalls of string comparison, and seeding tests.

2.1.1 Caveats

While Otter uses the OK format, there are a few caveats to the tests when using them with Otter.

- Otter only allows a single suite in each test, although the suite can have any number of cases. This means that test["suites"] should be a list of length 1, whose only element is a dict.
- Otter uses the "hidden" key of each test case only on Gradescope. When displaying results on Gradescope, test["suites"][0]["cases"][<int>]["hidden"] should evaluate to a boolean that indicates whether or not the test is hidden. The behavior of showing and hiding tests is described in Grading on Gradescope.

2.1.2 Writing OK Tests

We recommend that you develop assignments using Otter Assign, a tool which will generate these test files for you. If you already have assignments or would prefer to write them yourself, you can find an online OK test generator that will assist you in generating these test files without using Otter Assign.

2.1.3 Sample Test

Here is an annotated sample OK test:

```python
test = {
    "name": "q1",       # name of the test
    "points": 1,        # number of points for the entire suite
    "suites": [         # list of suites, only 1 suite allowed!
        {
            "cases": [          # list of test cases
                {               # each case is a dict
                    # the code is a test formatted for the Python interpreter;
                    # note that in any subsequent line of a multiline statement,
                    # the prompt becomes ... (see below)
                    "code": r"""
                    >>> 1 == 1
                    True
                    """,
                    "hidden": False,    # used to determine case visibility on Gradescope
                    "locked": False,    # ignored by Otter
                },
                {
                    "code": r"""
                    >>> for i in range(4):
                    ...     print(i == 1)
                    False
                    True
                    False
                    False
                    """,
                    "hidden": False,
                    "locked": False,
                },
            ],
            "scored": False,    # ignored by Otter
            "setup": "",        # ignored by Otter
            "teardown": "",     # ignored by Otter
            "type": "doctest",  # the type of test; only "doctest" allowed
        },
    ],
}
```

2.2 R: testthat Format

Ottr tests rely on the testthat library to run tests on students' submissions. Because of this, ottr's tests look like unit tests with a couple of important caveats. The main difference is that a global test_metadata variable is required, which is a multiline YAML-formatted string containing configurations for each test case (e.g. point values, whether the test case is hidden). Each call to testthat::test_that in the test file must have a corresponding entry in the test metadata, and the two are linked by the description of the test. An example test file would be:

```r
library(testthat)

test_metadata = "
name: q1
cases:
  - name: q1a
    points: 1
    hidden: false
  - name: q1b
    hidden: true
  - name: q1c
    hidden: true
"

test_that("q1a", {
    expect_equal(a, 100)
})

test_that("q1b", {
    expected = c(1, 2, 3, 4)
    expect_equal(some_vec, expected)
})

test_that("q1c", {
    coeffs = model$coefficients
    expected = c("(Intercept)" = 1.25, "Sepal.Length" = -3.40)
    expect_equal(coeffs, expected)
})
```

Each call to testthat::test_that has a description that matches the name of an element in the cases list of the test metadata. If a student passes that test case, they are awarded the points in that case's points key (which defaults to 1). The hidden key of each case is used on Gradescope and determines whether a test case's result is visible to students; it defaults to false. The example above is worth 3 points in total, 1 for each test case, and the only visible test case is q1a.
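The single-suite and doctest caveats for Python OK tests can be illustrated mechanically. The sketch below is not Otter's actual executor (which also handles seeding, logging, and scoring); it simply validates the single-suite rule and runs each case's code with Python's standard doctest machinery, which is what the "doctest" type implies:

```python
import doctest

def run_ok_test(test, env):
    """Validate and run a single-suite OK test against the globals in `env`.

    Returns a list of booleans, one per case.
    """
    suites = test["suites"]
    assert len(suites) == 1, "Otter allows only one suite per test"
    parser = doctest.DocTestParser()
    results = []
    for i, case in enumerate(suites[0]["cases"]):
        dt = parser.get_doctest(case["code"], env, f"{test['name']}_case{i}", None, 0)
        runner = doctest.DocTestRunner(verbose=False)
        res = runner.run(dt, out=lambda s: None)  # suppress failure reports
        results.append(res.failed == 0)
    return results
```

Each case passes exactly when every doctest example in its code produces the expected output.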
CHAPTER THREE

CREATING ASSIGNMENTS

3.1 Python Notebook Format

Otter ships with an assignment development and distribution tool called Otter Assign, an Otter-compliant fork of jassign, which was designed for OkPy. Otter Assign allows instructors to create assignments by writing questions, prompts, solutions, and public and private tests all in a single notebook, which is then parsed and broken down into student and autograder versions. Otter's notebook format groups prompts, solutions, and tests together into questions. Autograder tests are specified as cells in the notebook, and their output is used as the expected output of the autograder when generating tests. Each question has metadata, expressed in a YAML-formatted code block when the question is declared. Tests generated by Otter Assign follow the Otter-compliant OK format.

Note: Otter Assign is also backwards-compatible with jassign-formatted notebooks. To use jassign format with Otter Assign, specify the --jassign flag in your call to otter assign. While the formats are very similar, jassign's format has some key differences from the Otter Assign format, and many of the behaviors described below, e.g. intercell seeding, are not compatible with jassign format. For more information about formatting notebooks for jassign, see its documentation.

3.1.1 Assignment Metadata

In addition to the various command line arguments discussed below, Otter Assign also allows you to specify various assignment generation arguments in an assignment metadata cell. These are very similar to the question metadata cells described in the next section. Assignment metadata, included by convention as the first cell of the notebook, places YAML-formatted configurations inside a code block that begins with BEGIN ASSIGNMENT:

```
BEGIN ASSIGNMENT
init_cell: false
export_cell: true
...
```

This cell is removed from both output notebooks. These configurations, listed in the YAML snippet below, can be overwritten by their command line counterparts (e.g.
init_cell: true is overwritten by the --no-init-cell flag). The options, their defaults, and descriptions are listed below. Any unspecified keys will keep their default values. For more information about many of these arguments, see Usage and Output. Any keys that map to sub-dictionaries (e.g. export_cell, generate) can have their behaviors turned off by changing their value to false. The only one that defaults to true (with the specified sub-key defaults) is export_cell.
```yaml
run_tests: true                  # whether to run tests on the resulting autograder directory
requirements: requirements.txt   # path to a requirements file for Gradescope; appended by default
overwrite_requirements: false    # whether to overwrite Otter's default requirements rather than appending
init_cell: true                  # include an Otter initialization cell at the top of the notebook
check_all_cell: true             # include a check-all cell at the end of the notebook
export_cell:                     # include an export cell at the end of the notebook; set to false for no cell
  pdf: true                      # include a PDF in the export zip file
  filtering: true                # whether the PDF in the export should be filtered
  instructions: ''               # additional instructions for submission included above export cell
template_pdf: false              # whether to generate a manual question template PDF for Gradescope
generate:                        # configurations for running Otter Generate; defaults to false
  points: null                   # number of points to scale assignment to on Gradescope
  threshold: null                # a pass/fail threshold for the assignment on Gradescope
  show_stdout: false             # whether to show grading stdout to students once grades are published
  show_hidden: false             # whether to show hidden test results to students once grades are published
  grade_from_log: false          # whether to grade students' submissions from serialized environments in the log
  seed: null                     # a seed for intercell seeding during grading
  public_multiplier: null        # a percentage of test points to award for passing public tests
  pdfs:                          # configurations for generating PDFs for manually-graded questions; defaults to false
    course_id: ''                # Gradescope course ID for uploading PDFs for manually-graded questions
    assignment_id: ''            # Gradescope assignment ID for uploading PDFs for manually-graded questions
    filtering: true              # whether the PDFs should be filtered
service:                         # configurations for Otter Service
  notebook: ''                   # path to the notebook to submit if different from the master notebook name
  endpoint: ''                   # the endpoint for your Otter Service deployment; required
  auth: google                   # auth type for your Otter Service deployment
  assignment_id: ''              # the assignment ID from the Otter Service database
  class_id: ''                   # the class ID from the Otter Service database
save_environment: false          # whether to save students' environments in the log for grading
variables: {}                    # a mapping of variable names -> types for serialization
ignore_modules: []               # a list of module names whose functions to ignore during serialization
files: []                        # a list of file paths to include in the distribution directories
```

All paths specified in the configuration should be relative to the directory containing the master notebook. If, for example, I was running Otter Assign on the lab00.ipynb notebook in the structure below:
```
dev
├── requirements.txt
└── lab
    └── lab00
        ├── lab00.ipynb
        ├── utils.py
        └── data
            └── data.csv
```

and I wanted my requirements from dev/requirements.txt to be included, my configuration would look something like this:

```yaml
requirements: ../../requirements.txt
files:
  - data/data.csv
  - utils.py
...
```

A note about Otter Generate: the generate key of the assignment metadata has two forms. If you just want to generate and require no additional arguments, set generate: true in the YAML and Otter Assign will simply run otter generate from the autograder directory (this will also include any files passed to files, whose paths should be relative to the directory containing the notebook, not to the directory of execution). If you require additional arguments, e.g. points or show_stdout, then set generate to a nested dictionary of these parameters and their values:

```yaml
generate:
  seed: 42
  show_stdout: true
  show_hidden: true
```

You can also set the autograder up to automatically upload PDFs of student submissions to another Gradescope assignment by setting the necessary keys in the pdfs subkey of generate:

```yaml
generate:
  pdfs:
    token: YOUR_GS_TOKEN    # required
    class_id: 1234          # required
    assignment_id: 5678     # required
    filtering: true         # true is the default
```

If you have an Otter Service deployment to which you would like students to submit, the necessary configurations for this submission can be specified in the service key of the assignment metadata. This has the required keys endpoint (the URL of the VM), assignment_id (the ID of the assignment in the Otter Service database), and class_id (the class ID in the database). You can optionally also set an auth provider with the auth key (which defaults to google).

```yaml
service:
  endpoint:                 # required
  assignment_id: hw00       # required
  class_id: some_class      # required
  auth: google              # the default
```

If you are grading from the log or would like to store students' environments in the log, use the save_environment key.
If this key is set to true, Otter will serialize the student's environment whenever a check is run, as described in Logging. To restrict the serialization of variables to specific names and types, use the variables key, which maps variable names to fully-qualified type strings. The ignore_modules key is used to ignore functions from specific modules. To turn on grading from the log on Gradescope, set generate[grade_from_log] to true.
The configuration below turns on the serialization of environments, storing only variables named df that are pandas DataFrames.

```yaml
save_environment: true
variables:
  df: pandas.core.frame.DataFrame
```

As an example, the following assignment metadata includes an export cell but no filtering, no init cell, and calls Otter Generate with the flags --points 3 --seed 0.

```
BEGIN ASSIGNMENT
filtering: false
init_cell: false
generate:
  points: 3
  seed: 0
```

3.1.2 Autograded Questions

Here is an example question in an Otter Assign-formatted notebook:

For code questions, a question is a description Markdown cell, followed by a solution code cell and zero or more test code cells. The description cell must contain a code block (enclosed in triple backticks ```) that begins with BEGIN QUESTION on its own line, followed by YAML that defines metadata associated with the question. The rest of the code block within the description cell must be YAML-formatted with the following fields (in any order):

- name (required): a string identifier that is a legal file name (without an extension)
- manual (optional): a boolean (default false); whether to include the response cell in a PDF for manual grading
- points (optional): a number (default 1); how many points the question is worth

As an example, the question metadata below indicates an autograded question q1 worth 1 point.

```
BEGIN QUESTION
name: q1
manual: false
```

Solution Removal

Solution cells contain code formatted in such a way that the Otter Assign parser replaces lines or portions of lines with prespecified prompts. Otter uses the same solution replacement rules as jassign. From the jassign docs:

- A line ending in # SOLUTION will be replaced by ..., properly indented. If that line is an assignment statement, then only the expression(s) after the = symbol will be replaced.
- A line ending in # SOLUTION NO PROMPT or # SEED will be removed.
- A line # BEGIN SOLUTION or # BEGIN SOLUTION NO PROMPT must be paired with a later line # END SOLUTION. All lines in between are replaced with ... or removed completely in the case of NO PROMPT.
- A line """ # BEGIN PROMPT must be paired with a later line """ # END PROMPT. The contents of this multiline string (excluding the # BEGIN PROMPT) appear in the student cell. Single or double quotes are allowed. Optionally, a semicolon can be used to suppress output: """; # END PROMPT

```python
def square(x):
    y = x * x  # SOLUTION NO PROMPT
    return y  # SOLUTION

nine = square(3)  # SOLUTION
```

would be presented to students as

```python
def square(x):
    ...

nine = ...
```

And

```python
pi = 3.14
if True:
    # BEGIN SOLUTION
    radius = 3
    area = radius * pi * pi
    # END SOLUTION
print('a circle with radius', radius, 'has area', area)

def circumference(r):
    # BEGIN SOLUTION NO PROMPT
    return 2 * pi * r
    # END SOLUTION
    """ # BEGIN PROMPT
    # Next, define a circumference function.
    pass
    """; # END PROMPT
```

would be presented to students as
```python
pi = 3.14
if True:
    ...
print('a circle with radius', radius, 'has area', area)

def circumference(r):
    # Next, define a circumference function.
    pass
```

Test Cells

The test cells are any code cells following the solution cell that begin with the comment ## Test ## or ## Hidden Test ## (case insensitive). A Test is distributed to students so that they can validate their work. A Hidden Test is not distributed to students, but is used for scoring their work.

Note: Currently, the conversion to OK format does not handle multiline tests if any line but the last one generates output. So, if you want to print twice, make two separate test cells instead of a single cell with:

```python
print(1)
print(2)
```

If a question has no solution cell provided, the question will either be removed from the output notebook entirely, if it has only hidden tests, or will be replaced with an unprompted Notebook.check cell that runs those tests. In either case, the test files are written, but this provides a way of defining additional test cases that do not have public versions. Note, however, that the lack of a Notebook.check cell for questions with only hidden tests means that the tests are run at the end of execution, and are therefore not robust to variable name collisions.

Intercell Seeding

Otter Assign maintains support for intercell seeding by allowing seeds to be set in solution cells. To add a seed, write a line that ends with # SEED; when Otter Assign runs, this line will be removed from the student version of the notebook. This allows instructors to write code with deterministic output, from which hidden tests can be generated. Note that seed lines are removed in student outputs, so any results in that notebook may differ from the provided tests. However, when grading, seeds are executed between each cell, so if you are using seeds, make sure to use the same seed every time to ensure that seeding before every cell won't affect your tests.
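For instance, a seeded solution cell might look like the following (a hypothetical example; only the # SEED comment is Otter syntax):

```python
import random

random.seed(42)  # SEED
# With the seed fixed, this output is deterministic, so hidden tests
# generated from it will match what runs during grading.
samples = [random.randint(0, 100) for _ in range(5)]
```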
You will also be required to set this seed as a configuration of the generate key of the assignment metadata if using Otter Generate with Otter Assign.

3.1.3 Manually Graded Questions

Otter Assign also supports manually-graded questions using a similar specification to the one described above. To indicate a manually-graded question, set manual: true in the question metadata. A manually-graded question is defined by three parts:

- a question cell with metadata
- (optionally) a prompt cell
- a solution cell

Manually-graded solution cells have two formats: if a code cell, they can be delimited by solution removal syntax as above; if a Markdown cell, the start of at least one line must match the regex (<strong>|\*{2})solution:?(<\/strong>|\*{2}).
The latter means that as long as one of the lines in the cell starts with SOLUTION (case insensitive, with or without a colon :) in boldface, the cell is considered a solution cell. If there is a prompt cell for manually-graded questions (i.e. a cell between the question cell and the solution cell), then this prompt is included in the output. If none is present, Otter Assign automatically adds a Markdown cell with the contents _Type your answer here, replacing this text._

Manually-graded questions are automatically enclosed in <!-- BEGIN QUESTION --> and <!-- END QUESTION --> tags by Otter Assign so that only these questions are exported to the PDF when filtering is turned on (the default). In the autograder notebook, this includes the question cell, prompt cell, and solution cell. In the student notebook, this includes only the question and prompt cells. The <!-- END QUESTION --> tag is automatically inserted at the top of the next cell if it is a Markdown cell, or in a new Markdown cell before the next cell if it is not.

An example of a manually-graded code question:

An example of a manually-graded written question (with no prompt):

An example of a manually-graded written question with a custom prompt:
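The solution-removal rules described earlier in this section can be approximated in a few lines of Python. This is a simplified sketch, not Otter Assign's actual implementation (it ignores # BEGIN PROMPT blocks and other edge cases):

```python
import re

def strip_solutions(source: str) -> str:
    """Apply a subset of the solution-removal rules to a cell's source (sketch)."""
    out = []
    in_solution = False
    for line in source.splitlines():
        stripped = line.rstrip()
        indent = line[: len(line) - len(line.lstrip())] if line.strip() else ""
        if stripped.endswith("# END SOLUTION"):
            in_solution = False
        elif in_solution:
            pass  # drop lines inside a solution block
        elif stripped.endswith("# BEGIN SOLUTION NO PROMPT"):
            in_solution = True  # block removed with no prompt left behind
        elif stripped.endswith("# BEGIN SOLUTION"):
            in_solution = True
            out.append(indent + "...")  # block replaced by a single prompt
        elif stripped.endswith("# SOLUTION NO PROMPT") or stripped.endswith("# SEED"):
            pass  # line removed entirely
        elif stripped.endswith("# SOLUTION"):
            m = re.match(r"(\s*\w[\w.]*\s*=\s*)", line)
            # keep the left-hand side of an assignment, else replace the line
            out.append(m.group(1) + "..." if m else indent + "...")
        else:
            out.append(line)
    return "\n".join(out)
```

Running this on the square example from the Solution Removal section yields the sanitized student version shown there.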
3.2 R Notebook Format

Otter Assign is compatible with Otter's R autograding system and currently supports Jupyter notebook master documents. The format for using Otter Assign with R is very similar to the Python format, with a few important differences.

3.2.1 Assignment Metadata

As with Python, Otter Assign for R allows you to specify various assignment generation arguments in an assignment metadata cell.

```
BEGIN ASSIGNMENT
init_cell: false
export_cell: true
...
```

This cell is removed from both output notebooks. Any unspecified keys will keep their default values. For more information about many of these arguments, see Usage and Output. The YAML block below lists all configurations supported with R and their defaults. Any keys that appear in the Python section but not below will be ignored when using Otter Assign with R.

```yaml
requirements: requirements.txt   # path to a requirements file for Gradescope; appended by default
overwrite_requirements: false    # whether to overwrite Otter's default requirements rather than appending
template_pdf: false              # whether to generate a manual question template PDF for Gradescope
generate:                        # configurations for running Otter Generate; defaults to false
  points: null                   # number of points to scale assignment to on Gradescope
  threshold: null                # a pass/fail threshold for the assignment on Gradescope
  show_stdout: false             # whether to show grading stdout to students once grades are published
  show_hidden: false             # whether to show hidden test results to students once grades are published
files: []                        # a list of file paths to include in the distribution directories
```
3.2.2 Autograded Questions

Here is an example question in an Otter Assign for R formatted notebook:

For code questions, a question is a description Markdown cell, followed by a solution code cell and zero or more test code cells. The description cell must contain a code block (enclosed in triple backticks ```) that begins with BEGIN QUESTION on its own line, followed by YAML that defines metadata associated with the question. The rest of the code block within the description cell must be YAML-formatted with the following fields (in any order):

- name (required): a string identifier that is a legal file name (without an extension)
- manual (optional): a boolean (default false); whether to include the response cell in a PDF for manual grading
- points (optional): a number or list of numbers for the point values of each test case. If a list of values, each case gets its corresponding value. If a single value, the number is divided by the number of cases so that a question with n cases has test cases worth points/n points each.

As an example, the question metadata below indicates an autograded question q1 with 3 subparts worth 1, 2, and 1 points, respectively.

```
BEGIN QUESTION
name: q1
points:
  - 1
  - 2
  - 1
```
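The point-allocation rule above can be sketched as a small helper (hypothetical, for illustration only; Otter performs this internally):

```python
def per_case_points(points, n_cases):
    """Return the point value of each test case under the rule described above."""
    if isinstance(points, list):
        if len(points) != n_cases:
            raise ValueError("one point value per case is required")
        return points
    # a single number is split evenly across the cases
    return [points / n_cases] * n_cases
```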
Solution Removal

Solution cells contain code formatted in such a way that the assign parser replaces lines or portions of lines with prespecified prompts. The format for solution cells in R notebooks is the same as in Python notebooks, described here. Otter Assign's solution removal for prompts is compatible with normal strings in R, including assigning these to a dummy variable so that there is no undesired output below the cell:

# this is OK:
. = " # BEGIN PROMPT
some.var <- ...
" # END PROMPT

Test Cells

The test cells are any code cells following the solution cell that begin with the comment ## Test ## or ## Hidden Test ## (case insensitive). A Test is distributed to students so that they can validate their work. A Hidden Test is not distributed to students, but is used for scoring their work. When writing tests, each test cell should be a single call to testthat::test_that and there should be no code outside of the test_that call. For example, instead of

## Test ##
data = data.frame()
test_that("q1a", {
    # some test
})

do the following:

## Test ##
test_that("q1a", {
    data = data.frame()
    # some test
})

The removal behavior regarding questions with no solution provided holds for R notebooks.

Manually Graded Questions

Otter Assign also supports manually-graded questions using a similar specification to the one described above. The behavior for manually graded questions in R is exactly the same as it is in Python.

3.3 RMarkdown Format

Otter Assign is compatible with Otter's R autograding system and currently supports Jupyter notebook and RMarkdown master documents. The format for using Otter Assign with R is very similar to the Python format with a few important differences.
3.3.1 Assignment Metadata

As with Python, Otter Assign for R also allows you to specify various assignment generation arguments in an assignment metadata code block.

```
BEGIN ASSIGNMENT
init_cell: false
export_cell: true
...
```

This block is removed from both output files. Any unspecified keys will keep their default values. For more information about many of these arguments, see Usage and Output. The YAML block below lists all configurations supported with Rmd files and their defaults. Any keys that appear in the Python or R Jupyter notebook sections but not below will be ignored when using Otter Assign with Rmd files.

requirements: requirements.txt  # path to a requirements file for Gradescope; appended by default
overwrite_requirements: false   # whether to overwrite Otter's default requirements rather than appending
generate:                       # configurations for running Otter Generate; defaults to false
    points: null                # number of points to scale assignment to on Gradescope
    threshold: null             # a pass/fail threshold for the assignment on Gradescope
    show_stdout: false          # whether to show grading stdout to students once grades are published
    show_hidden: false          # whether to show hidden test results to students once grades are published
files: []                       # a list of file paths to include in the distribution directories

3.3.2 Autograded Questions

Here is an example question in an Otter Assign-formatted Rmd file:

**Question 1:** Find the radius of a circle that has a 90 deg. arc of length 2. Assign this value to `ans.1`

```
BEGIN QUESTION
name: q1
manual: false
points:
```

```{r}
ans.1 <- 2 * 2 * pi * 2 / pi / pi / 2 # SOLUTION
```

```{r}
## Test ##
test_that("q1a", {
    expect_true(ans.1 > 1)
    expect_true(ans.1 < 2)
})
```

```{r}
## Hidden Test ##
test_that("q1b", {
    tol = 1e-5
    actual_answer = 4 / pi
    expect_true(ans.1 > actual_answer - tol)
    expect_true(ans.1 < actual_answer + tol)
})
```

For code questions, a question is some description markup, followed by a solution code block and zero or more test code blocks. The description must contain a code block (enclosed in triple backticks ```) that begins with BEGIN QUESTION on its own line, followed by YAML that defines metadata associated with the question. The rest of the code block within the description cell must be YAML-formatted with the following fields (in any order):

name (required) - a string identifier that is a legal file name (without an extension)
manual (optional) - a boolean (default false); whether to include the response cell in a PDF for manual grading
points (optional) - a number or list of numbers for the point values of each question. If a list of values, each case gets its corresponding value. If a single value, the number is divided by the number of cases so that a question with \(n\) cases has test cases worth \(\frac{\text{points}}{n}\) points.

As an example, the question metadata below indicates an autograded question q1 with 3 subparts worth 1, 2, and 1 points, resp.

```
BEGIN QUESTION
name: q1
points:
    - 1
    - 2
    - 1
```

Solution Removal

Solution cells contain code formatted in such a way that the assign parser replaces lines or portions of lines with prespecified prompts. The format for solution cells in Rmd files is the same as in Python and R Jupyter notebooks, described here. Otter Assign's solution removal for prompts is compatible with normal strings in R, including assigning these to a dummy variable so that there is no undesired output below the cell:

# this is OK:
. = " # BEGIN PROMPT
some.var <- ...
" # END PROMPT
Test Cells

The test cells are any code cells following the solution cell that begin with the comment ## Test ## or ## Hidden Test ## (case insensitive). A Test is distributed to students so that they can validate their work. A Hidden Test is not distributed to students, but is used for scoring their work. When writing tests, each test cell should be a single call to testthat::test_that and there should be no code outside of the test_that call. For example, instead of

## Test ##
data = data.frame()
test_that("q1a", {
    # some test
})

do the following:

## Test ##
test_that("q1a", {
    data = data.frame()
    # some test
})

Manually Graded Questions

Otter Assign also supports manually-graded questions using a similar specification to the one described above. To indicate a manually-graded question, set manual: true in the question metadata. A manually-graded question is defined by three parts:

- a question metadata
- (optionally) a prompt
- a solution

Manually-graded solution cells have two formats:

- If the response is code (e.g. making a plot), they can be delimited by solution removal syntax as above.
- If the response is markup, the solution should be wrapped in special HTML comments (see below) to indicate removal in the sanitized version.

To delimit a markup solution to a manual question, wrap the solution in the HTML comments <!-- BEGIN SOLUTION --> and <!-- END SOLUTION --> on their own lines to indicate that the content in between should be removed.

<!-- BEGIN SOLUTION -->
solution goes here
<!-- END SOLUTION -->

To use a custom Markdown prompt, include a <!-- BEGIN/END PROMPT --> block with a solution block, but add NO PROMPT inside the BEGIN SOLUTION comment:

<!-- BEGIN PROMPT -->
prompt goes here
<!-- END PROMPT -->

<!-- BEGIN SOLUTION NO PROMPT -->
solution goes here
<!-- END SOLUTION -->

If NO PROMPT is not indicated, Otter Assign automatically replaces the solution with a line containing _Type your answer here, replacing this text._

An example of a manually-graded code question:

**Question 7:** Plot $f(x) = \cos e^x$ on $[0,10]$.

```
BEGIN QUESTION
name: q7
manual: true
```

```{r}
# BEGIN SOLUTION
x = seq(0, 10, 0.01)
y = cos(exp(x))
ggplot(data.frame(x, y), aes(x=x, y=y)) + geom_line()
# END SOLUTION
```

An example of a manually-graded written question (with no prompt):

**Question 5:** Simplify $\sum_{i=1}^n n$.

```
BEGIN QUESTION
name: q5
manual: true
```

<!-- BEGIN SOLUTION -->
$\frac{n(n+1)}{2}$
<!-- END SOLUTION -->

An example of a manually-graded written question with a custom prompt:

**Question 6:** Fill in the blank.

```
BEGIN QUESTION
name: q6
manual: true
```

<!-- BEGIN PROMPT -->
The mitochondria is the
<!-- END PROMPT -->
of the cell.

<!-- BEGIN SOLUTION NO PROMPT -->
powerhouse
<!-- END SOLUTION -->
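One way to picture the sanitization step for markup solutions is a regular-expression pass that replaces each solution block with the default prompt line, and removes NO PROMPT blocks entirely (since those come with a custom prompt). This is only a sketch of the behavior described above, not Otter Assign's actual implementation; the function name sanitize is hypothetical.

```python
import re

# matches an HTML-comment-delimited solution block, including the
# optional NO PROMPT marker, across multiple lines
SOLUTION_BLOCK = re.compile(
    r"<!--\s*BEGIN SOLUTION( NO PROMPT)?\s*-->.*?<!--\s*END SOLUTION\s*-->",
    re.DOTALL,
)

DEFAULT_PROMPT = "_Type your answer here, replacing this text._"

def sanitize(markup):
    """Replace solution blocks with the default prompt; blocks marked
    NO PROMPT (used alongside a custom prompt) are removed entirely."""
    def repl(match):
        return "" if match.group(1) else DEFAULT_PROMPT
    return SOLUTION_BLOCK.sub(repl, markup)

src = "<!-- BEGIN SOLUTION -->\n$\\frac{n(n+1)}{2}$\n<!-- END SOLUTION -->"
print(sanitize(src))  # _Type your answer here, replacing this text._
```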
3.4 Usage and Output

Otter Assign is called using the otter assign command. This command takes in two required arguments. The first is master, the path to the master notebook (the one formatted as described above), and the second is result, the path at which output should be written. The optional files argument takes an arbitrary number of paths to files that should be shipped with notebooks (e.g. data files, images, Python executables). Otter Assign will automatically recognize the language of the notebook by looking at the kernel metadata; similarly, if using an Rmd file, it will automatically choose the language as R. This behavior can be overridden using the -l flag which takes the name of the language as its argument.

Note: The path to the master notebook and to the result directory should be relative to the working directory, but any paths in files should be relative to the parent directory of the master notebook. To clarify, the following directory structure:

dev
    lab
        lab00
        - lab00.ipynb
        data
        - data.csv
    dist
        lab
            lab00    # THIS is where we want the results to go

would be run through Otter Assign from the dev directory with

otter assign lab/lab00/lab00.ipynb dist/lab/lab00 data/data.csv

The default behavior of Otter Assign is to do the following:

1. Filter test cells from the master notebook and write these to test files
2. Add Otter initialization, export, and Notebook.check_all cells
3. Clear outputs and write questions (with metadata hidden), prompts, and solutions to a notebook in a new autograder directory
4. Write all tests to autograder/tests
5. Copy autograder notebook with solutions removed into a new student directory
6. Write public tests to student/tests
7. Copy files into autograder and student directories
8. Run all tests in autograder/tests on the solutions notebook to ensure they pass
9.
If generate is passed, generate a Gradescope autograder zipfile from the autograder directory

The behaviors described in step 2 can be overridden using the optional arguments described in the help specification. An important note: make sure that you run all cells in the master notebook and save it with the outputs so that Otter Assign can generate the test files based on these outputs. The outputs will be cleared in the copies generated by Otter Assign.
3.4.1 Python Output Additions

The following additional cells are included only in Python notebooks.

Export Formats and Flags

By default, Otter Assign adds an initialization cell at the top of the notebook with the contents

# Initialize Otter
import otter
grader = otter.Notebook()

To prevent this behavior, add the --no-init-cell flag. Otter Assign also automatically adds a check-all cell and an export cell to the end of the notebook. The check-all cells consist of a Markdown cell:

To double-check your work, the cell below will rerun all of the autograder tests.

and a code cell that calls otter.Notebook.check_all:

grader.check_all()

The export cells consist of a Markdown cell:

## Submission

Make sure you have run all cells in your notebook in order before running the cell below, so that all images/graphs appear in the output. **Please save before submitting!**

and a code cell that calls otter.Notebook.export with HTML comment filtering:

# Save your notebook first, then run this cell to export.
grader.export("/path/to/notebook.ipynb")

To prevent the inclusion of a check-all cell, use the --no-check-all flag. To prevent cell filtering in the export cell, use the --no-filter flag. To remove the export cells entirely, use the --no-export-cell flag. If you have custom instructions for submission that you want to add to the export cell, pass them to the --instructions flag.

Note: Otter Assign currently only supports HTML comment filtering. This means that if you have other cells you want included in the export, you must delimit them using HTML comments, not using cell tags.

3.4.2 Otter Assign Example

Consider the directory structure below, where hw00/hw00.ipynb is an Otter Assign-formatted notebook.

hw00
- hw00.ipynb
- data.csv

To generate the distribution versions of hw00.ipynb (after cding into hw00), I would run

otter assign hw00.ipynb dist

If it was an Rmd file instead, I would run
otter assign hw00.Rmd dist

This will create a new folder called dist with autograder and student as subdirectories, as described above.

hw00
- hw00.ipynb
- data.csv
dist
    autograder
    - hw00.ipynb
    tests
    - q1.(py|R)
    - q2.(py|R)
    ...
    student
    - hw00.ipynb
    tests
    - q1.(py|R)
    - q2.(py|R)
    ...

If I had wanted to include data.csv in the distribution folders, I would change my call to

otter assign hw00.ipynb dist data.csv

The resulting directory structure would be:

hw00
- hw00.ipynb
- data.csv
dist
    autograder
    - data.csv
    - hw00.ipynb
    tests
    student
    - data.csv
    - hw00.ipynb
    tests

In generating the distribution versions, I can prevent Otter Assign from rerunning the tests using the --no-run-tests flag:

otter assign --no-run-tests hw00.ipynb dist data.csv

Because tests are not run on R notebooks, the above configuration would be ignored if hw00.ipynb had an R kernel. If I wanted no initialization cell and no cell filtering in the export cell, I would run

otter assign --no-init-cell --no-filter hw00.ipynb dist data.csv
3.5 Otter Assign CLI Reference

Create distribution versions of otter-assign-formatted notebook

usage: otter assign [-h] [-l [{python,r}]] [--no-export-cell] [--no-run-tests]
                    [--no-init-cell] [--no-check-all] [--no-pdfs] [--debug]
                    [-r [REQUIREMENTS]] [--overwrite-requirements]
                    master result [files [files ...]]

Positional Arguments

master - Notebook with solutions and tests.
result - Directory containing the result.
files - Other support files needed for distribution (e.g. .py files, data files)

Named Arguments

-l, --lang - Possible choices: python, r. Assignment programming language; defaults to Python
--no-export-cell - Don't inject an export cell into the notebook
--no-run-tests - Don't run tests.
--no-init-cell - Don't automatically generate an Otter init cell
--no-check-all - Don't automatically add a check_all cell
--no-pdfs - Don't generate PDFs; overrides assignment config
--debug - Do not ignore errors in running tests for debugging
-r, --requirements - Path to requirements.txt file; ignored if no generate key in assignment metadata
--overwrite-requirements - Overwrite (rather than append to) default requirements for Gradescope; ignored if no REQUIREMENTS argument
CHAPTER FOUR

STUDENT USAGE

Otter provides an IPython API and a command line tool that allow students to run checks and export notebooks within the assignment environment.

4.1 The Notebook API

Otter supports in-notebook checks so that students can check their progress when working through assignments via the otter.Notebook class. The Notebook takes one optional parameter that corresponds to the path from the current working directory to the directory of tests; the default for this path is ./tests.

import otter
grader = otter.Notebook()

If my tests were in ./hw00-tests, then I would instantiate with

grader = otter.Notebook("hw00-tests")

Students can run tests in the test directory using Notebook.check, which takes in a question identifier (the file name without the .py extension). For example,

grader.check("q1")

will run the test q1.py in the tests directory. If a test passes, then the cell displays All tests passed! If the test fails, then the details of the first failing test are printed out, including the test code, expected output, and actual output:
Students can also run all tests in the tests directory at once using Notebook.check_all:

grader.check_all()

This will rerun all tests against the current global environment and display the results for each test concatenated into a single HTML output. It is recommended that this cell is put at the end of a notebook for students to run before they submit so that students can ensure that there are no variable name collisions, propagating errors, or other things that would cause the autograder to fail a test they should be passing.
4.1.1 Exporting Submissions

Students can also use the Notebook class to generate their own PDFs for manual grading using the method Notebook.export. This function takes an optional argument of the path to the notebook; if unspecified, it will infer the path by trying to read the config file (if present), using the path of the only notebook in the working directory if there is only one, or it will raise an error telling you to provide the path. This method creates a submission zip file that includes the notebook file, the log, and, optionally, a PDF of the notebook (set pdf=False to disable this last). As an example, if I wanted to export hw01.ipynb with cell filtering, my call would be

grader.export("hw01.ipynb")

as filtering is by default on. If I instead wanted no filtering, I would use

grader.export("hw01.ipynb", filtering=False)

To generate just a PDF of the notebook, use Notebook.to_pdf.

4.2 Command Line Script Checker

Otter also features a command line tool that allows students to run checks on Python files from the command line. otter check takes one required argument, the path to the file that is being checked, and three optional flags:

-t is the path to the directory of tests. If left unspecified, it is assumed to be ./tests
-q is the identifier of a specific question to check (the file name without the .py extension). If left unspecified, all tests in the tests directory are run.
--seed is an optional random seed for execution seeding

The recommended file structure for using the checker is something like the one below:

hw00
- hw00.py
tests
- q1.py
- q2.py
...

After cding into hw00, if I wanted to run the test q2.py, I would run

$ otter check hw00.py -q q2
All tests passed!

In the example above, I passed all of the tests.
If I had failed any of them, I would get an output like that below:

$ otter check hw00.py -q q2
1 of 2 tests passed
Tests passed: possible
Tests failed:
*********************************************************************
Line 2, in tests/q2.py 0
Failed example:
    1 == 1
Expected:
    False
Got:
    True

To run all tests at once, I would run

$ otter check hw00.py
Tests passed: q1 q3 q4 q5
Tests failed:
*********************************************************************
Line 2, in tests/q2.py 0
Failed example:
    1 == 1
Expected:
    False
Got:
    True

As you can see, I passed four of the five tests above, and failed q2.py. If I instead had the directory structure below (note the new tests directory name)

hw00
- hw00.py
hw00-tests
- q1.py
- q2.py
...

then all of my commands would be changed by adding -t hw00-tests to each call. As an example, let's rerun all of the tests again:

$ otter check hw00.py -t hw00-tests
Tests passed: q1 q3 q4 q5
Tests failed:
*********************************************************************
Line 2, in hw00-tests/q2.py 0
Failed example:
    1 == 1
Expected:
    False
Got:
    True
4.3 Otter Check Reference

otter.Notebook

class otter.check.notebook.Notebook(test_dir='./tests')
    Notebook class for in-notebook autograding
    Parameters
        test_dir (str, optional) - path to tests directory

check(question, global_env=None)
    Runs tests for a specific question against a global environment. If no global environment is provided, the test is run against the calling frame's environment.
    Parameters
        question (str) - name of question being graded
        global_env (dict, optional) - global environment resulting from execution of a single notebook
    Returns the grade for the question
    Return type otter.test_files.abstract_test.TestCollectionResults

check_all()
    Runs all tests on this notebook. Tests are run against the current global environment, so any tests with variable name collisions will fail.

export(nb_path=None, export_path=None, pdf=True, filtering=True, pagebreaks=True, files=[], display_link=True)
    Exports a submission to a zipfile. Creates a submission zipfile from a notebook at nb_path, optionally including a PDF export of the notebook and any files in files.
    Parameters
        nb_path (str) - path to notebook we want to export
        export_path (str, optional) - path at which to write zipfile
        pdf (bool, optional) - whether a PDF should be included
        filtering (bool, optional) - whether the notebook should be filtered; default True
        pagebreaks (bool, optional) - whether pagebreaks should be included between questions
        files (list of str, optional) - paths to other files to include in the zipfile
        display_link (bool, optional) - whether or not to display a download link

submit()
    Submits this notebook to an Otter Service instance if Otter Service is configured
    Raises AssertionError - if this notebook is not configured for Otter Service

to_pdf(nb_path=None, filtering=True, pagebreaks=True, display_link=True)
    Exports a notebook to a PDF. filter_type can be "html" or "tags" if filtering by HTML comments or cell tags, respectively.
    Parameters
        nb_path (str) - path to iPython notebook we want to export
        filtering (bool, optional) - set True if only exporting a subset of nb cells to PDF
        pagebreaks (bool, optional) - if True, pagebreaks are included between questions
        display_link (bool, optional) - whether or not to display a download link

otter check

Checks Python file against tests

usage: otter check [-h] [-q QUESTION] [-t TESTS_PATH] [--seed SEED] file

Positional Arguments

file - Python file to grade

Named Arguments

-q, --question - Grade a specific test
-t, --tests-path - Path to test files. Default: tests
--seed - A random seed to be executed before each cell
CHAPTER FIVE

OTTER CONFIGURATION FILES

In many use cases, the use of the otter.Notebook class by students requires some configurations to be set. These configurations are stored in a simple JSON-formatted file ending in the .otter extension. When otter.Notebook is instantiated, it globs all .otter files in the working directory and, if any are present, asserts that there is only 1 and loads this as the configuration. If no .otter config is found, it is assumed that there is no configuration file and, therefore, only features that don't require a config file are available.

The available keys in a .otter file are listed below, along with their default values. The only required key is notebook (i.e. if you use a .otter file it must specify a value for notebook).

{
    "notebook": "",              // the notebook filename
    "endpoint": null,            // the Otter Service endpoint
    "assignment_id": "",         // assignment ID in the Otter Service database
    "class_id": "",              // class ID in the Otter Service database
    "auth": "google",            // the auth type for your Otter Service deployment
    "save_environment": false,   // whether to serialize the environment in the log during checks
    "ignore_modules": [],        // a list of modules whose functions to ignore during serialization
    "variables": {}              // a mapping of variable names -> types to restrict during serialization
}

5.1 Configuring Otter Service

The main use of a .otter file is to configure submission to an Otter Service deployment. This requires that the endpoint, assignment_id, and class_id keys be specified. The optional auth key corresponds to the name of the service's auth provider (defaulting to google). With these specified, students will be able to use Notebook.submit to send their notebook submission to the Otter Service deployment so that it can be graded. An example of this use would be:

{
    "notebook": "hw00.ipynb",
    "endpoint": "",
    "assignment_id": "hw00",
    "class_id": "some_class",
    "auth": "google"
}

More information can be found in Grading on a Deployable Service.
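The config-discovery behavior described above (glob for .otter files in the working directory, require that there is exactly one, and require the notebook key) can be sketched as follows. This is an illustration of the documented behavior under stated assumptions, not Otter's actual source; the function name load_otter_config is hypothetical.

```python
import glob
import json
import os
import tempfile

def load_otter_config(directory="."):
    """Find and load the single .otter config file in a directory.

    Returns None when no config is present; raises if more than one
    .otter file is found or if the required "notebook" key is missing.
    """
    configs = glob.glob(os.path.join(directory, "*.otter"))
    if not configs:
        return None  # no config file: only config-free features are available
    assert len(configs) == 1, "more than one .otter file found"
    with open(configs[0]) as f:
        config = json.load(f)
    assert "notebook" in config, '"notebook" key is required'
    return config

# demo in a scratch directory containing a single .otter file
scratch = tempfile.mkdtemp()
with open(os.path.join(scratch, "demo.otter"), "w") as f:
    json.dump({"notebook": "hw00.ipynb", "save_environment": False}, f)

config = load_otter_config(scratch)
print(config["notebook"])  # hw00.ipynb
```

Note that the annotated listing above uses // comments for exposition; an actual .otter file must be plain JSON, as in the demo.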
5.2 Configuring the Serialization of Environments

If you are using logs to grade assignments from serialized environments, the save_environment key must be set to true. Any module names in the ignore_modules list will have their functions ignored during serialization. If variables is specified, it should be a dictionary mapping variable names to fully-qualified types; variables will only be serialized if they are present as keys in this dictionary and their type matches the specified type. An example of this use would be:

{
    "notebook": "hw00.ipynb",
    "save_environment": true,
    "variables": {
        "df": "pandas.core.frame.DataFrame",
        "fn": "builtins.function",
        "arr": "numpy.ndarray"
    }
}

The function otter.utils.get_variable_type when called on an object will return this fully-qualified type string.

>>> from otter.utils import get_variable_type
>>> import numpy as np
>>> import pandas as pd
>>> get_variable_type(np.array([]))
'numpy.ndarray'
>>> get_variable_type(pd.DataFrame())
'pandas.core.frame.DataFrame'
>>> fn = lambda x: x
>>> get_variable_type(fn)
'builtins.function'

More information about grading from serialized environments can be found in Logging.
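The "module.TypeName" strings used in the variables mapping can be reproduced from an object's type metadata. The sketch below shows the idea; Otter's actual get_variable_type implementation may differ, and the helper name here is hypothetical.

```python
def get_fully_qualified_type(obj):
    """Return the fully-qualified type name of obj, in the same
    "module.TypeName" form used by the "variables" config key."""
    t = type(obj)
    return f"{t.__module__}.{t.__qualname__}"

print(get_fully_qualified_type(lambda x: x))  # builtins.function
print(get_fully_qualified_type({}))           # builtins.dict
```

For a pandas DataFrame, type(df).__module__ is "pandas.core.frame" and the qualified name is "DataFrame", which matches the "pandas.core.frame.DataFrame" string in the example config.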
CHAPTER SIX

GRADING LOCALLY

The command line interface allows instructors to grade notebooks locally by launching Docker containers on the instructor's machine that grade notebooks and return a CSV of grades and (optionally) PDF versions of student submissions for manually graded questions.

6.1 Metadata

Metadata files and flags are optional, but can be helpful when using the CSVs generated by Otter for grading. When organizing submissions for grading, Otter supports two major categories of exports:

- 3rd party collection exports, or
- instructor-made metadata files

These two categories are broken down and described below.

6.1.1 3rd Party Exports

If you are using a grading service like Gradescope or an LMS like Canvas to collect submissions, Otter can interpret the export format of these 3rd party services in place of a metadata file. To use a service like Gradescope or Canvas, download the submissions for an assignment, unzip the provided file, and then proceed as described in Grading Locally using the metadata flag corresponding to your chosen service. For example, if I had a Canvas export, I would unzip it, cd into the unzipped directory, copy my tests to ./tests, and run

otter grade -c

to grade the submissions (assuming no extra requirements, no PDF generation, and no support files required).

6.1.2 Metadata Files

If you are not using a service like Gradescope or Canvas to collect submissions, instructors can also create a simple JSON- or YAML-formatted metadata file for their students' submissions and provide this to Otter. The structure of the metadata is quite simple: it is a list of 2-item dictionaries, where each dictionary has a student identifier stored as the identifier key and the filename of the student's submission as filename. An example of each is provided below.

YAML format:
- identifier: 0
  filename: test-nb-0.ipynb
- identifier: 1
  filename: test-nb-1.ipynb
- identifier: 2
  filename: test-nb-2.ipynb
- identifier: 3
  filename: test-nb-3.ipynb
- identifier: 4
  filename: test-nb-4.ipynb

JSON format:

[
    {
        "identifier": 0,
        "filename": "test-nb-0.ipynb"
    },
    {
        "identifier": 1,
        "filename": "test-nb-1.ipynb"
    },
    {
        "identifier": 2,
        "filename": "test-nb-2.ipynb"
    },
    {
        "identifier": 3,
        "filename": "test-nb-3.ipynb"
    },
    {
        "identifier": 4,
        "filename": "test-nb-4.ipynb"
    }
]

A JSON- or YAML-formatted metadata file is specified to Otter using the -j or -y flag, respectively. Each flag requires a single argument that corresponds to the path to the metadata file. See the Basic Usage section of Grading Locally for more information.

6.2 Using the CLI

Before using the command line utility, you should have

- written tests for the assignment
- downloaded submissions into a directory

The grading interface, encapsulated in the otter grade command, runs the local grading process and defines the options that instructors can set when grading. A comprehensive list of flags is provided below.
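Both metadata formats deserialize to the same structure: a list of {identifier, filename} records. A short sketch of turning a JSON metadata file into a lookup from filename to student identifier (the helper name is hypothetical; this is not part of Otter's API):

```python
import json

def metadata_to_lookup(metadata_json):
    """Map each submission filename to its student identifier."""
    records = json.loads(metadata_json)
    return {r["filename"]: r["identifier"] for r in records}

meta = (
    '[{"identifier": 0, "filename": "test-nb-0.ipynb"},'
    ' {"identifier": 1, "filename": "test-nb-1.ipynb"}]'
)
print(metadata_to_lookup(meta))
# {'test-nb-0.ipynb': 0, 'test-nb-1.ipynb': 1}
```

The YAML form carries exactly the same records, so a yaml.safe_load of the YAML example would feed the same dictionary comprehension.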
6.2.1 Basic Usage

The simplest usage of Otter Grade is when we have a directory structure as below (and we have cded into grading in the command line) and we don't require PDFs or additional requirements.

grading
- meta.yml
- nb0.ipynb
- nb1.ipynb
- nb2.ipynb
...
tests
- q1.py
- q2.py
- q3.py
...

In the case above, our otter command would be, very simply,

otter grade -y meta.yml

Because the submissions are in the current working directory (grading), our tests are at ./tests, and we don't mind output to ./, we can use the default values of the -p, -t, and -o flags, leaving the only necessary flag the metadata flag. Since we have a YAML metadata file, we specify -y and pass the path to the metadata file, ./meta.yml. After grading, our directory will look like this:

grading
- final_grades.csv
- meta.yml
- nb0.ipynb
- nb1.ipynb
- nb2.ipynb
...
tests
- q1.py
- q2.py
- q3.py
...

and the grades for each submission will be in final_grades.csv. If we wanted to generate PDFs for manual grading, we would specify one of the three PDF flags:

otter grade -y meta.yml --pdf

and at the end of grading we would have

grading
- final_grades.csv
- meta.yml
- nb0.ipynb
- nb1.ipynb
- nb2.ipynb
...
submission_pdfs
- nb0.pdf
- nb1.pdf
- nb2.pdf
...
tests
- q1.py
- q2.py
- q3.py
...

Metadata Flags

The four metadata flags, -g, -c, -j, and -y, correspond to different export/metadata file formats, and are optional. Also note that the latter two require you to specify a path to the metadata file. You may not specify more than one metadata flag per run of Otter Grade. For more information about metadata and export formats, see above. If you don't specify a metadata flag, the CSV file that Otter returns will be primary keyed on the filename of the submission.

Requirements

The ucbdsinfra/otter-grader Docker image comes preinstalled with the following Python packages and their dependencies:

- datascience
- jupyter_client
- ipykernel
- matplotlib
- pandas
- ipywidgets
- scipy
- tornado
- nb2pdf
- otter-grader

If you require any packages not listed above, or among the dependencies of any packages above, you should create a requirements.txt file containing only those packages. If this file is created in the working directory (i.e. ./requirements.txt), then Otter will automatically find this file and include it. If this file is not at ./requirements.txt, pass its path to the -r flag. For example, continuing the example above with the package SymPy, I would create a requirements.txt

grading
- meta.yml
- nb0.ipynb
- nb1.ipynb
- nb2.ipynb
...
- requirements.txt
tests
- q1.py
- q2.py
- q3.py
...

that lists only SymPy:

$ cat requirements.txt
sympy

Now my call, using HTML comment filtered PDF generation this time, would become

otter grade --html-filter

Note the lack of the -r flag; since I created my requirements file in the working directory, Otter found it automatically. Also note that I am no longer including a metadata flag; the CSV file will have a file column instead of an identifier column, primary keying each submission on the filename rather than a provided identifier.

Grading Python Scripts

If I wanted to grade Python scripts instead of IPython notebooks, my call to Otter would only add the -s flag. Consider the directory structure below:

grading
- meta.yml
- sub0.py
- sub1.py
- sub2.py
...
- requirements.txt
tests
- q1.py
- q2.py
- q3.py
...

My call to grade these submissions would be

otter grade -sy meta.yml

Note the lack of a PDF flag, as it doesn't make sense to convert Python files to PDFs. PDF flags only work when grading IPython notebooks.

Grading otter.Notebook.export Zip Files

Otter Grade supports grading the zip archives produced by otter.Notebook.export. To grade these, just add the -z flag to your otter grade command and run as normal. For example, for the following grading directory,

grading
- meta.yml
- sub0.zip
- sub1.zip
- sub2.zip
...
- requirements.txt
tests
- q1.py
- q2.py
- q3.py
...

I would grade with

otter grade -zy meta.yml

Support Files

Some notebooks require support files to run (e.g. data files). If your notebooks require any such files, there are two ways to get them into the container so that they are available to notebooks:

- specifying paths to the files with the -f flag
- putting them into the notebook path

Suppose that my notebooks in grading required data.csv in my ../data directory:

data
- data.csv
grading
- meta.yml
- nb0.ipynb
- nb1.ipynb
- nb2.ipynb
...
- requirements.txt
tests
- q1.py
- q2.py
- q3.py
...

I could pass this data into the container using the call

otter grade -y meta.yml -f ../data/data.csv

Or I could move (or copy) data.csv into grading:

mv ../data/data.csv ./

data
grading
- data.csv
- meta.yml
- nb0.ipynb
- nb1.ipynb
- nb2.ipynb
...
- requirements.txt
tests
- q1.py
- q2.py
- q3.py
...

and then just run Otter as normal:

otter grade

All non-notebook files in the notebooks path are copied into all of the containers, so data.csv will be made available to all notebooks.

Intercell Seeding

Otter Grader also supports intercell seeding via the --seed flag. In notebooks, NumPy and Python's random library are both seeded between every pair of code cells, so that the deterministic output can be used in writing hidden tests. In scripts, NumPy and random are seeded before the script's execution. As an example, I can pass a seed to Otter with the above directory structure with

otter grade --seed

6.3 Otter Grade Reference

Grade assignments locally using Docker containers

usage: otter grade [-h] [-p PATH] [-t TESTS_PATH] [-o OUTPUT_PATH] [-g] [-c] [-j JSON] [-y YAML] [-s] [-z] [--pdfs [{unfiltered,html}]] [-f FILES [FILES...]] [-v] [--seed SEED] [-r REQUIREMENTS] [--containers CONTAINERS] [--image IMAGE] [--no-kill] [--debug]

Named Arguments

-p, --path          Path to directory of submissions. Default: ./
-t, --tests-path    Path to directory of tests. Default: ./tests/
-o, --output-path   Path to which to write output. Default: ./
-g, --gradescope    Flag for Gradescope export
-c, --canvas        Flag for Canvas export
-j, --json          Flag for path to JSON metadata. Default: False
-y, --yaml          Flag for path to YAML metadata. Default: False
-s, --scripts       Flag to indicate grading Python scripts
-z, --zips          Whether submissions are zip files from Notebook.export
--pdfs              Possible choices: unfiltered, html. Default: False
-f, --files         Specify support files needed to execute code (e.g. utils, data files)
-v, --verbose       Flag for verbose output
--seed              A random seed to be executed before each cell
-r, --requirements  Flag for Python requirements file path; ./requirements.txt automatically checked. Default: requirements.txt
--containers        Specify number of containers to run in parallel
--image             Custom docker image to run on. Default: ucbdsinfra/otter-grader
--no-kill           Do not kill containers after grading
--debug             Print stdout/stderr from grading for debugging
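The intercell seeding behavior described in this chapter can be approximated with a short sketch (a hypothetical illustration only, using just Python's random module; Otter also seeds NumPy, and `run_cells` is an invented name):

```python
import random

def run_cells(cells, seed):
    """Sketch of intercell seeding: re-seed the RNG before executing each
    cell so that cell outputs are deterministic across grading runs."""
    results = []
    for cell in cells:
        random.seed(seed)  # seeded between every pair of code cells
        results.append(cell())
    return results
```

Because the seed is reset before every cell, two grading runs of the same notebook produce identical random values, which is what makes deterministic hidden tests possible.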
CHAPTER SEVEN

GRADING ON GRADESCOPE

7.1 Creating Configuration Files

Otter-Grader also allows instructors to use Gradescope's autograding system to collect and grade students' submissions. This section assumes that instructors are familiar with Gradescope's interface and know how to set up assignments on Gradescope; for more information on using Gradescope, see their help pages.

Writing Tests for Gradescope

The tests that are used by the Gradescope autograder are the same as those used in other uses of Otter, but there is one important field that is relevant to Gradescope and not pertinent to any other uses. As noted in the second bullet here, the "hidden" key of each test case indicates the visibility of that specific test case. If a student passes all tests, they are shown a successful check. If they pass all public tests but fail hidden tests, they are shown a successful check but a second output is shown below that, for instructors only, showing the output of the failed test. If students fail a public test, students are shown the output of the failed test and there is no second box. For more information on how tests are displayed to students, see the next section.

Using the Command Line Generator

To use Otter with Gradescope's autograder, you must first generate a zipfile that you will upload to Gradescope so that they can create a Docker image with which to grade submissions. Otter's command line utility otter generate allows instructors to create this zipfile from their machines. It is divided into two subcommands: autograder and token.

Before Using Otter Generate

Before using Otter Generate, you should already have written tests for the assignment, created a Gradescope autograder assignment, and collected extra requirements into a requirements.txt file (see here). (Note: the default requirements can be overwritten by your requirements by passing the --overwrite-requirements flag.)
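The visibility rule for the "hidden" key described above can be sketched as follows (a hypothetical helper, not part of Otter; the assumption that an unspecified value means hidden is mine):

```python
def visible_to_student(case):
    """Sketch of the visibility rule: students see a test case's result
    only when its "hidden" key is explicitly False (assumption: an
    unspecified value means the case is hidden)."""
    return case.get("hidden", True) is False
```

Instructors always see all cases; this predicate only governs the student-facing view.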
Directory Structure

For the rest of this page, assume that we have the following directory structure:

hw00-dev
- data.csv
- hw00-sol.ipynb
- hw00.ipynb
- requirements.txt
- utils.py
tests
- q1.py
- q2.py
...
hidden-tests
- q1.py
- q2.py
...

Also assume that we have cd'd into hw00-dev.

Usage

The general usage of otter generate autograder is to create a zipfile at some output path (-o flag, default ./) which you will then upload to Gradescope. Otter Generate has a few optional flags, described in the Otter Generate Reference. If you do not specify -t or -o, then the defaults will be used. If you do not specify -r, Otter looks in the working directory for requirements.txt and automatically adds it if found; if it is not found, then it is assumed there are no additional requirements. There is also an optional positional argument that goes at the end of the command, files, that is a list of any files that are required for the notebook to execute (e.g. data files, Python scripts). To autograde an R assignment, pass the -l r flag to indicate that the language of the assignment is R. The simplest usage in our example would be

otter generate autograder

This would create a zipfile with the tests in ./tests and no extra requirements or files. If we needed data.csv in the notebook, our call would instead become

otter generate autograder data.csv

Note that if we needed the requirements in requirements.txt, our call wouldn't change, since Otter automatically found ./requirements.txt. Now let's say that we maintained two different directories of tests: tests with public versions of tests and hidden-tests with hidden versions.
Because I want to grade with the hidden tests, my call then becomes

otter generate autograder -t hidden-tests data.csv

Now let's say that I need some functions defined in utils.py; then I would add this to the last part of my Otter Generate call:

otter generate autograder -t hidden-tests data.csv utils.py

If this was instead an R assignment, I would run
otter generate autograder -t hidden-tests -l r data.csv

Grading with Environments

Otter can grade assignments using saved environments in the log in the Gradescope container. This behavior is not supported for R assignments. This works by unshelving the environment stored in each check entry of Otter's log and grading against it. The notebook is parsed and only its import statements are executed. For more information about saving and using environments, see Logging. To configure this behavior, two things are required:

- the use of the --grade-from-log flag when generating an autograder zipfile
- using an Otter configuration file with save_environments set to true

This will tell Otter to shelve the global environment each time a student calls Notebook.check (pruning the environments of old calls each time it is called on the same question). When the assignment is exported using Notebook.export, the log file (at .otter_log) is also exported with the global environments. These environments are read in in the Gradescope container and are then used for grading. Because one environment is saved for each check call, variable name collisions can be averted, since each question is graded using the global environment at the time it was checked. Note that any requirements needed for execution need to be installed in the Gradescope container, because Otter's shelving mechanism does not store module objects.

Autosubmission of Notebook PDFs

Otter Generate allows instructors to automatically generate PDFs of students' notebooks and upload these as submissions to a separate Gradescope assignment. This requires a Gradescope token, which can be obtained with otter generate token. This will ask you to log in to Gradescope with your credentials and will provide you with a token, which can be passed to the --token flag of otter generate autograder to initialize the generation of PDFs.
$ otter generate token
Please provide the email address on your Gradescope account: some_instructor@some_institution.edu
Password:
Your token is: {YOUR TOKEN}

Otter Generate also needs the course ID and assignment ID of the assignment to which PDFs should be submitted. This information can be gathered from the assignment URL on Gradescope:

https://www.gradescope.com/courses/{COURSE ID}/assignments/{ASSIGNMENT ID}

Currently, this action supports HTML comment filtering, but filtering can be turned off with the --unfiltered-pdfs flag.
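Extracting the two IDs from a Gradescope assignment URL can be sketched with a regular expression (a hypothetical helper, not part of Otter):

```python
import re

def parse_gradescope_url(url):
    """Pull the course and assignment IDs out of a Gradescope assignment
    URL of the form .../courses/{COURSE ID}/assignments/{ASSIGNMENT ID}."""
    m = re.search(r"/courses/(\d+)/assignments/(\d+)", url)
    if m is None:
        raise ValueError("not a Gradescope assignment URL")
    return {"course_id": m.group(1), "assignment_id": m.group(2)}
```

The resulting values are what you would pass to the --course-id and --assignment-id flags.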
Pass/Fail Thresholds

The Gradescope generator supports providing a pass/fail threshold. A threshold is passed as a float between 0 and 1 such that if a student receives at least that percentage of points, they will receive full points as their grade and 0 points otherwise. The threshold is specified with the --threshold flag:

otter generate autograder data.csv --threshold 0.75

For example, if a student passes a 2-point and a 1-point test but fails a 4-point test (a 43%) on a 25% threshold, they will get all 7 points. If they only pass the 1-point test (a 14%), they will get 0 points.

Overriding Points Possible

By default, the number of points possible on Gradescope is the sum of the point values of each test. This value can be overridden, however, to some other value using the --points flag, which accepts an integer. Then the number of points awarded will be the provided points value scaled by the percentage of points awarded by the autograder. For example, if a student passes a 2-point and a 1-point test but fails a 4-point test, they will receive (2 + 1) / (2 + 1 + 4) * 2 ≈ 0.86 points out of a possible 2 when --points is set to 2. As an example, the command below scales the number of points to 3:

otter generate autograder data.csv --points 3

Public Test Multiplier

You can optionally specify a percentage of the points to award students for passing all of the public tests. This percentage defaults to 0 but can be changed using the --public-multiplier flag:

otter generate autograder --public-multiplier 0.5

Intercell Seeding

The Gradescope autograder supports intercell seeding with the use of the --seed flag. This behavior is not supported for R assignments. Passing it an integer will cause the autograder to seed NumPy and Python's random library between every pair of code cells. This is useful for writing deterministic hidden tests. More information about Otter seeding here. As an example, I can set an intercell seed of 42 with

otter generate autograder data.csv --seed 42
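The pass/fail threshold and points-override arithmetic described above can be sketched together as follows (a hypothetical illustration of the scoring math, not Otter's implementation; `adjust_score` is an invented name):

```python
def adjust_score(earned, possible, threshold=None, points=None):
    """Sketch of the scoring adjustments: --threshold makes grading
    all-or-nothing, and --points rescales the total to an overridden value."""
    pct = earned / possible
    if threshold is not None:
        # full credit at or above the threshold, zero otherwise
        return possible if pct >= threshold else 0
    if points is not None:
        # scale the overridden total by the fraction of points earned
        return pct * points
    return earned
```

With the examples above: passing 3 of 7 points under a 0.25 threshold yields all 7 points, while the same 3 of 7 points with --points 2 yields (3/7) * 2 ≈ 0.86 points.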
Showing Autograder Results

The generator lastly allows instructors to specify whether or not the stdout of the grading process (anything printed to the console by the grader or the notebook) is shown to students. The stdout includes a summary of the student's test results, including the points earned and possible of public and hidden tests, as well as the visibility of tests as indicated by test["hidden"]. This behavior is turned off by default and can be turned on by passing the --show-stdout flag to otter generate autograder.

otter generate autograder data.csv --show-stdout

If --show-stdout is passed, the stdout will be made available to students only after grades are published on Gradescope. The same can be done for hidden test outputs using the --show-hidden flag:

otter generate autograder --show-hidden

The next section details more about how output on Gradescope is formatted.

Generating with Otter Assign

Otter Assign comes with an option to generate this zip file automatically when the distribution notebooks are created via the --generate flag. See Distributing Assignments for more details.

7.2 Grading Container Image

When you use Otter Generate to create the configuration zip file for Gradescope, Otter includes the following software and packages for installation. The zip archive, when unzipped, has the following contents:

autograder/
- requirements.r
- requirements.txt
- run_autograder
- setup.sh
files/
- ...
tests/

setup.sh

This file, required by Gradescope, performs the installation of necessary software for the autograder to run. The default script looks like this:

#!/usr/bin/env bash
apt-get clean
apt-get update
apt-get install -y python3.7 python3-pip python3.7-dev
apt-get clean
apt-get update
apt-get install -y pandoc
apt-get install -y texlive-xetex texlive-fonts-recommended texlive-generic-recommended

# install wkhtmltopdf
wget --quiet -O /tmp/wkhtmltopdf.deb releases/download/ /wkhtmltox_bionic_amd64.deb
apt-get install -y /tmp/wkhtmltopdf.deb

update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.7 1

apt-get clean
apt-get update
apt-get install -y build-essential libcurl4-gnutls-dev libxml2-dev libssl-dev libcurl4-openssl-dev

# install conda
wget -nv -O /autograder/source/miniconda_install.sh " miniconda/miniconda3-latest-linux-x86_64.sh"
chmod +x /autograder/source/miniconda_install.sh
/autograder/source/miniconda_install.sh -b
echo "export PATH=/root/miniconda3/bin:\$PATH" >> /root/.bashrc
export PATH=/root/miniconda3/bin:$PATH
export TAR="/bin/tar"

# install R dependencies
conda install --yes r-base r-essentials
conda install --yes r-devtools -c conda-forge

# install requirements
pip3 install -r /autograder/source/requirements.txt
pip install -r /autograder/source/requirements.txt
Rscript /autograder/source/requirements.r

# install ottr; not sure why it needs to happen twice but whatever
git clone --single-branch -b stable /autograder/source/ottr
cd /autograder/source/ottr
Rscript -e "devtools::install()"
Rscript -e "devtools::install()"

This script does the following:

1. Install Python 3.7 and pip
2. Install pandoc and LaTeX
3. Install wkhtmltopdf
4. Set Python 3.7 to the default python version
5. Install apt-get dependencies for R
6. Install Miniconda
7. Install R, r-essentials, and r-devtools from conda
8. Install Python requirements from requirements.txt (this step installs Otter itself)
9. Install R requirements from requirements.r
10. Install Otter's R autograding package, ottr
Currently this script is not customizable, but you can unzip the provided zip file to edit it manually.

requirements.txt

This file contains the Python dependencies required to execute submissions. It is also the file that your requirements are appended to when you provide your own requirements file for a Python assignment (unless --overwrite-requirements is passed). The default requirements are:

datascience
jupyter_client
ipykernel
matplotlib
pandas
ipywidgets
scipy
seaborn
sklearn
jinja2
nbconvert
nbformat
dill
rpy2
jupytext
numpy==
tornado==5.1.1
git+ git@bc58bf5a8d13df97e43935c262dd4f2a5c16e075

Note that this installs the exact version of Otter that you are working from. This is important to ensure that the configurations you set locally are correctly processed and processable by the version on the Gradescope container. If you choose to overwrite the requirements, make sure to do the same.

requirements.r

This file uses R functions like install.packages and devtools::install_github to install R packages. If you are creating an R assignment, it is this file (rather than requirements.txt) that your requirements and overwrite-requirements options are applied to. The default file contains:

install.packages(c(
    "usethis",
    "testthat",
    "startup"
), repos="
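The append-vs-overwrite behavior for requirements files can be sketched as (a hypothetical helper, not Otter's actual code):

```python
def merge_requirements(defaults, user, overwrite=False):
    """Sketch of how a user requirements file is combined with the
    defaults: appended by default, replacing them entirely when
    --overwrite-requirements is passed."""
    return list(user) if overwrite else list(defaults) + list(user)
```

If you do overwrite, remember that the defaults above (including Otter itself) are no longer installed automatically.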
run_autograder

This is the file that Gradescope uses to actually run the autograder when a student submits. Otter provides this file as a Python executable that defines the configurations for grading in a dictionary and then calls otter.generate.run_autograder.main:

#!/usr/bin/env python3

import os
import subprocess

from otter.generate.run_autograder import main as run_autograder

config = {
    "score_threshold": None,
    "points_possible": None,
    "show_stdout_on_release": False,
    "show_hidden_tests_on_release": False,
    "seed": None,
    "grade_from_log": False,
    "serialized_variables": {},
    "public_multiplier": 0,
    "token": None,
    "course_id": 'None',
    "assignment_id": 'None',
    "filtering": True,
    "pagebreaks": True,
    "debug": False,
    "autograder_dir": '/autograder',
    "lang": 'python',
}

if __name__ == "__main__":
    run_autograder(config)

When debugging grading via SSH on Gradescope, a helpful tip is to set the debug key of this dict to True; this will stop the autograder from ignoring errors when running students' code, and can be helpful in debugging specific submission issues.

tests

This is a directory containing the test files that you provide. All .py (or .r) files in the tests directory path that you provide are copied into this directory and are made available to submissions when the autograder runs.

files

This directory, not present in all autograder zip files, contains any support files that you provide to be made available to submissions.
7.3 Gradescope Results Format

This section details how the autograder runs and how results are displayed to students and instructors on Gradescope. When a student submits to Gradescope, the autograder does the following:

1. Copies the tests and support files from the autograder source
2. Globs the first IPYNB file and assumes this to be the submission to be graded
3. Corrects instances of otter.Notebook for different test paths using a regex
4. Reads in the log from the submission if it exists
5. Grades the notebook, globbing all tests and grading from the log if specified
6. Looks for discrepancies between the logged scores and the autograder scores and warns about these if present
7. If indicated, exports the notebook to a PDF and POSTs this notebook to the other Gradescope assignment
8. Generates the JSON object for Gradescope's results
9. Makes adjustments to the scores and visibility based on the configurations
10. Writes the JSON to the results file
11. Prints the results as a dataframe to stdout

Instructor View

Once a student's submission has been autograded, the Autograder Results page will show the stdout of the grading process in the Autograder Output box and the student's score in the side bar to the right of the output. The stdout includes information from verifying the student's scores against the logged scores, a dataframe that contains the student's score breakdown by question, and a summary of the information about test output visibility.
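Step 2 of the list above (globbing the submission notebook) might look like this sketch, assuming Gradescope's standard submission directory layout (`find_submission` is an invented name):

```python
import glob
import os

def find_submission(submission_dir):
    """Sketch of step 2 above: glob the submission directory and take the
    first .ipynb file found as the notebook to grade."""
    notebooks = sorted(glob.glob(os.path.join(submission_dir, "*.ipynb")))
    if not notebooks:
        raise FileNotFoundError("no notebook found in submission")
    return notebooks[0]
```

This is why students should submit exactly one notebook: any additional notebooks in the submission would be ignored.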
When a submission shows a discrepancy between the autograder score and the logged score, a line noting the discrepancy is printed above the dataframe. If there are no discrepancies, the stdout says so.
Below the autograder output, each test case is broken down into boxes. Based on the passing of public and hidden tests, there are three possible cases:

- If a public test is failed, there is a single box for the test called {test name} - Public that displays the failed output of the test.
- If all public tests pass but a hidden test fails, there are two boxes: one called {test name} - Public that shows All tests passed! and a second called {test name} - Hidden that shows the failed output of the test.
- If all tests pass, there are two boxes, {test name} - Public and {test name} - Hidden, that both show All tests passed!.

Note that these examples award points with a public test multiplier of 0.5; if it had been 0, all of the {test name}
- Public tests (except failed ones, which are always worth full points) would be worth 0 points.

Instructors will be able to see all tests. The invisibility of a test to students is indicated to instructors by an icon (all tests with this icon are hidden to students).

Student View

On submission, students will only be able to see the results of those test cases for which test["suites"][0]["cases"][<int>]["hidden"] evaluates to False (see Test Files for more info). If test["suites"][0]["cases"][<int>]["hidden"] is True or not specified, then that test case is hidden. If --show-stdout was specified when constructing the autograder zipfile, then the autograder output from above will be shown to students after grades are published on Gradescope. Students will not be able to see the results of hidden tests nor the tests themselves, but they will see that they failed some hidden test in the printed DataFrame from the stdout. If --show-hidden was passed, students will also see the failed output of the failed hidden tests (again, once grades are published).

7.4 Otter Generate Reference

otter generate autograder

Create an autograder zip file for Gradescope

usage: otter generate autograder [-h] [-t [TESTS_PATH]] [-o [OUTPUT_PATH]] [-r [REQUIREMENTS]] [--overwrite-requirements] [-l LANG] [--threshold THRESHOLD] [--points POINTS] [--show-stdout] [--show-hidden] [--seed SEED] [--token [TOKEN]] [--unfiltered-pdfs] [--no-pagebreaks] [--course-id COURSE_ID] [--assignment-id ASSIGNMENT_ID] [--grade-from-log] [--serialized-variables SERIALIZED_VARIABLES] [--public-multiplier [PUBLIC_MULTIPLIER]] [--autograder-dir [AUTOGRADER_DIR]] [files [files...]]

Positional Arguments

files    Other support files needed for grading (e.g. .py files, data files)

Named Arguments

-t, --tests-path    Path to test files. Default: ./tests/
-o, --output-path   Path to which to write zipfile. Default: ./
-r, --requirements  Path to requirements.txt file; ./requirements.txt automatically checked
--overwrite-requirements  Overwrite (rather than append to) default requirements for Gradescope; ignored if no REQUIREMENTS argument
-l, --lang          Assignment programming language; defaults to Python. Default: python
--threshold         Pass/fail score threshold
--points            Points possible, overrides sum of test points
--show-stdout       Show autograder test results (P/F only, no hints; incl. hidden tests) after publishing grades
--show-hidden       Show autograder results for hidden tests after publishing grades
--seed              A random seed to be executed before each cell
--token             Gradescope token for generating and uploading PDFs. Default:
--unfiltered-pdfs   Whether the PDFs should be unfiltered
--no-pagebreaks     Whether the PDFs should not have page breaks between questions
--course-id         Gradescope course ID
--assignment-id     Gradescope assignment ID for PDFs
--grade-from-log    Whether to grade assignments based on the logged environments
--serialized-variables  String representation of Python dict mapping variable names to full types for verification when deserializing log. Default: {}
--public-multiplier  Percentage of points to award for passing all public tests. Default: 0
--autograder-dir    Root autograding directory inside grading container. Default: /autograder

otter generate token

Get a Gradescope token

usage: otter generate token [-h]
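Several of these flags map directly onto keys of the run_autograder configuration dictionary shown in the Grading Container Image section. For a few of them, that mapping can be sketched as (a hypothetical helper, not Otter's code):

```python
def flags_to_config(threshold=None, points=None, show_stdout=False, seed=None):
    """Sketch of how a few otter generate flags populate the
    run_autograder config dictionary embedded in the autograder zip."""
    return {
        "score_threshold": threshold,        # --threshold
        "points_possible": points,           # --points
        "show_stdout_on_release": show_stdout,  # --show-stdout
        "seed": seed,                        # --seed
    }
```

Since the zip file embeds these values, you can also unzip it and edit the dictionary by hand rather than regenerating.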
CHAPTER EIGHT

GRADING ON A DEPLOYABLE SERVICE

Otter Service is a deployable grading server that students can send submissions to from within the notebook and which will grade submissions based on predefined tests. It is a service packaged with Otter but has additional requirements that are not necessary for running Otter without Otter Service. Otter Service is a tornado server that accepts requests with student submissions, writes the submissions to disk, grades them, and saves the grade breakdowns in a Postgres database. This page details how to configure an already-deployed Otter Service instance, and is agnostic of deployment type. For more information on deploying an Otter Service instance, see below.

8.1 Configuration Repo

Otter Service is configured to grade assignments through a repository structure. While maintaining a git repo of assignment configurations is not required, it certainly makes editing and updating your assignments easier. Any directory that has the structure described here will do to set up assignments on an Otter Service deployment. If you are grading multiple classes on an Otter Service deployment, you must have one assignments repo for each class. The first and most important file required is the conf.yml file, at the root of the repo. This file will define some important configurations and provide paths to the necessary files for each assignment. While the following structure is not required, we recommend something like it to keep a logical structure to the repo.

data-8-assignments
- conf.yml
- requirements.txt
hw
  hw01
  - requirements.txt
  tests
  - q0.py
  - q1.py
  ...
  hw02
  ...
proj
  proj02
  ...
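The root-level conf.yml requirement can be checked with a small sketch (a hypothetical helper, not part of Otter; `find_conf` is an invented name):

```python
import os

def find_conf(repo_root):
    """Sketch: an assignments repo must have conf.yml at its root."""
    path = os.path.join(repo_root, "conf.yml")
    if not os.path.isfile(path):
        raise FileNotFoundError("conf.yml must be at the root of the assignments repo")
    return path
```

A check like this is a reasonable first sanity test before running any build commands against a repo.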
8.1.1 The Configuration File

The conf.yml file is a YAML-formatted file that defines the necessary configurations and paths to set up assignments on an Otter Service deployment. It has the following structure:

class_name: Data 8          # the name of the course
class_id: data8x            # a **unique** ID for the course
requirements: requirements.txt  # path to a **global** requirements file
assignments:                # list of assignments
- name: Project 2           # assignment name
  assignment_id: proj02     # assignment ID, unique among all assignments in this config
  tests_path: proj/proj02/tests  # path to directory of tests for this assignment
  requirements: proj/proj02/requirements.txt  # path to requirements specific to **this** assignment
  seed: 42                  # random seed for intercell seeding
  files:                    # list of files needed by the autograder, e.g. data files
  - proj/proj02/frontiers1.csv
  - proj/proj02/frontiers2.csv
  - proj/proj02/players.py
...

Note that the class_id must be unique among all classes being run on the Otter Service instance, whereas the assignment_ids must be unique only within a specific class. Also note that there are two requirements files: a global requirements file that contains requirements not already in the Docker image, to be installed on all assignment images in this config, and an assignment-specific requirements file to be installed only in the image for that assignment. Also note that all paths are relative to the root of the assignments repo.

8.1.2 Other Files

Other files can be included anywhere in a subdirectory of the assignments repo, as long as their paths are specified in the conf.yml file. The assignments repo should contain any necessary requirements files, all test files, and any support files needed to execute the notebooks (e.g. data files, Python scripts).

8.2 Building the Images

Once you have cloned your assignments repo onto your Otter Service deployment, use the command otter service build to add assignments to the database and create their Docker images.
Note that it is often necessary to run otter service build with sudo so that you have the necessary permissions; in this section, assume that we have prepended each command with sudo -E. The builder takes one positional argument that corresponds to the path to the assignments repo; this is assumed to be ./ (i.e. it is assumed to be the working directory). It also has four arguments corresponding to Postgres connection information (the host, port, username, and password, which default to localhost, 5432, root, and root, resp.), an optional --image flag to change out the base image for the Docker images (defaults to ucbdsinfra/otter-grader), and a quiet (-q) flag to stop Docker from printing to the console when building the images. From the root of the assignments repo, building couldn't be simpler:

otter service build
This command will add all assignments in the conf.yml to the database and create the needed Docker images to grade submissions. Images will be tagged as {CLASS ID}-{ASSIGNMENT ID} from the conf.yml file. So the first assignment in the example above would be tagged data8x-proj02.

8.3 Running the Service

Once you have run otter service build, you are ready to start the service instance listening for requests. To do this, run otter service start, which takes an optional --port flag that indicates the port to listen on (this defaults to 80, the HTTP port). When a student submits, their submission is written to the disk and a job to grade the assignment is sent to a concurrent.futures.ThreadPoolExecutor. Once grading is finished, the database entry for that submission is updated with the score, and the stdout and stderr of the grading process are written to text files in the submission directory. A submission is written to the following path:

/otter-service/submissions/class-{CLASS ID}/assignment-{ASSIGNMENT ID}/submission-{SUBMISSION ID}/{ASSIGNMENT NAME}.ipynb

To start Otter Service, run

$ otter service start
Listening on port 80

If you need to run Otter Service in the background, e.g. so you can log out of your SSH session, we recommend using screen to run the process in a separate session.

8.4 Interacting with the Database

In setting up your Otter Service deployment, a Postgres database called otter_db was created to hold information about classes, assignments, and scores.
The database has the following schema:

CREATE TABLE users (
    user_id SERIAL PRIMARY KEY,
    api_keys VARCHAR[] CHECK (cardinality(api_keys) > 0),
    username TEXT UNIQUE,
    password VARCHAR,
    email TEXT UNIQUE,
    CONSTRAINT has_username_or_email CHECK (username IS NOT NULL OR email IS NOT NULL)
)

CREATE TABLE classes (
    class_id TEXT PRIMARY KEY,
    class_name TEXT NOT NULL
)

CREATE TABLE assignments (
    assignment_id TEXT NOT NULL,
    class_id TEXT REFERENCES classes (class_id) NOT NULL,
    assignment_name TEXT NOT NULL,
    seed INTEGER,
    PRIMARY KEY (assignment_id, class_id)
)

CREATE TABLE submissions (
    submission_id SERIAL PRIMARY KEY,
    assignment_id TEXT NOT NULL,
    class_id TEXT NOT NULL,
    user_id INTEGER REFERENCES users(user_id) NOT NULL,
    file_path TEXT NOT NULL,
    timestamp TIMESTAMP NOT NULL,
    score JSONB,
    FOREIGN KEY (assignment_id, class_id) REFERENCES assignments (assignment_id, class_id)
)

The above tables encompass all of the information that Otter Service stores. When a student sends a POST request with their submission, the submission is logged in the submissions table with a null score. Once grading finishes, the score for that submission is filled in as a JSON object that maps question identifiers to scores by question.

8.5 Deploying a Grading Service

This section describes how to deploy an Otter Service instance. For more information on how to use an already-deployed service, see Grading on a Deployable Service.

8.5.1 VM Requirements

Otter Service instances can be deployed as virtual machines on the cloud provider of your choice. We recommend using Debian as the OS on your VM. To use Otter Service, the VM requires the following to be installed:

- Postgres
- Python 3.6+ (and pip3)
- Docker
- Otter's Python requirements

Once you have installed the necessary requirements, you need to create a Postgres user that Otter can use to interact with the database. We recommend, and Otter defaults to, the username root with password root.

sudo -u postgres bash
createuser -dp root  # enter password 'root' when prompted

Now that there is a root user, the database can be created:

sudo -E otter service create

otter service create creates the Postgres database otter_db, if it does not already exist, with the following relations:

CREATE TABLE users (
    user_id SERIAL PRIMARY KEY,
    api_keys VARCHAR[] CHECK (cardinality(api_keys) > 0),
    username TEXT UNIQUE,
    password VARCHAR,
    email TEXT UNIQUE,
    CONSTRAINT has_username_or_email CHECK (username IS NOT NULL OR email IS NOT NULL)
)
CREATE TABLE classes (
    class_id TEXT PRIMARY KEY,
    class_name TEXT NOT NULL
)

CREATE TABLE assignments (
    assignment_id TEXT NOT NULL,
    class_id TEXT REFERENCES classes (class_id) NOT NULL,
    assignment_name TEXT NOT NULL,
    seed INTEGER,
    PRIMARY KEY (assignment_id, class_id)
)

CREATE TABLE submissions (
    submission_id SERIAL PRIMARY KEY,
    assignment_id TEXT NOT NULL,
    class_id TEXT NOT NULL,
    user_id INTEGER REFERENCES users(user_id) NOT NULL,
    file_path TEXT NOT NULL,
    timestamp TIMESTAMP NOT NULL,
    score JSONB,
    FOREIGN KEY (assignment_id, class_id) REFERENCES assignments (assignment_id, class_id)
)

Now Otter needs a few environment variables set. We recommend that you put these in your .bashrc and then call Otter using sudo -E to maintain your environment. This can be done by appending something like the following to ~/.bashrc:

export GOOGLE_CLIENT_KEY="someKey1234.apps.googleusercontent.com"
export GOOGLE_CLIENT_SECRET="someSecret5678"
export OTTER_ENDPOINT="{YOUR DNS NAME}"

Finally, the last thing to do is allow inbound traffic on port 80 (or whatever port you pass to the --port flag of otter service start). This will allow requests to be sent to the VM for grading. Now that you have set up your VM, you're ready to start grading with it.

Step-by-Step Deployment Instructions

This section provides step-by-step instructions for deploying an Otter Service instance on different cloud providers.

Azure

To start deploying an Otter Service VM on Azure, first create a resource group and subscription to house your VM. Then create a new VM from the Virtual Machines menu on the Azure portal. We recommend the Debian 10 "Buster" image. The recommended size parameters (Standard D2s v3, 2 vCPUs, 8 GiB memory) are good for a smaller course (< 50 students), but you can scale up the memory for larger courses. You should also provide your SSH public key so that you can SSH into the VM to set it up.
You will need the SSH and HTTP ports open so that you can configure the VM and so that it can accept requests.
The other default configurations are fine, so select Review + Create, double check the details, and click Create to create the VM. Once your deployment has finished, you will need to change the public IP address from dynamic to static and secure a DNS name label. To do this, click Configure next to DNS name on the Overview page. On this page, set Assignment to Static and create a DNS name label in the DNS name label field for your VM. The latter is very important because Google Cloud will not let you use an IP address when creating your OAuth credentials. Click Save at the top to save these changes.
Continue setup by SSHing into the VM from your Terminal (more info on how to do this under Connect > SSH). Run the following commands to install Postgres, pip3, Docker, and Otter and its Python requirements.

sudo apt update
sudo apt-get update
sudo apt install -y postgresql postgresql-client python3-pip git python-psycopg2 libpq-dev nano
sudo apt-get -y install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
curl -fsSL <Docker GPG key URL> | sudo apt-key add -
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
sudo pip3 install git+<otter-grader repository URL>
sudo pip3 install -r requirements.txt

Once the above have run successfully, you need to create a Postgres user that Otter can use to create and manage its database. We recommend a user with username root and password root, as these are the values that Otter defaults to.

$ sudo -u postgres bash
$ createuser -dP root
Password:

At the Password: prompt, type the user password.

Next, you will need to create a Google Cloud Platform project to use for Google OAuth. Once you have created the project, go to APIs & Services > OAuth consent screen and create one. You will need to add the DNS name you configured on Azure to the Authorized domains list. Click Save, then go to APIs & Services > Credentials and click
+ Create Credentials at the top and select OAuth Client ID. Set the application type to Web application and add {YOUR DNS NAME}/auth/callback to the list of authorized redirect URIs. Click Create. Google will provide you with a client ID and client secret; copy these down and SSH back into your VM. Use nano (or your preferred text editor) to open ~/.bashrc and add the following to the end of it, filling in the corresponding values.

export GOOGLE_CLIENT_KEY="{YOUR GOOGLE CLIENT ID}"
export GOOGLE_CLIENT_SECRET="{YOUR GOOGLE CLIENT SECRET}"
export OTTER_ENDPOINT="{YOUR DNS NAME}"
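Before moving on, it can help to verify that the three variables above are actually visible to the process you run Otter from. A minimal sketch (the variable names are the ones listed above; the helper itself is not part of Otter):

```python
import os

# the variables Otter Service reads from the environment
REQUIRED = ["GOOGLE_CLIENT_KEY", "GOOGLE_CLIENT_SECRET", "OTTER_ENDPOINT"]

def missing_vars(environ=os.environ):
    """Return the names of any required variables that are unset or empty."""
    return [name for name in REQUIRED if not environ.get(name)]
```

Remember that sudo drops the caller's environment by default, which is why the commands in this section use sudo -E.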
Now source your .bashrc file and create the Postgres database for Otter with

sudo -E otter service create

Congrats, that's all the setup! You're ready to start grading with Otter Service, as described here.

8.6 Otter Service Reference

8.6.1 otter service build

Build images for an otter-service instance

usage: otter service build [-h] [--db-host DB_HOST] [--db-port DB_PORT]
                           [-u DB_USER] [-p DB_PASS] [--image IMAGE] [-q]
                           repo_path

Positional Arguments:
    repo_path        Path to assignments repo root. Default: .

Named Arguments:
    --db-host        Postgres database host. Default: localhost
    --db-port        Postgres database port. Default: 5432
    -u, --db-user    Postgres database user. Default: root
    -p, --db-pass    Postgres database password. Default: root
    --image          Base image for grading containers. Default: ucbdsinfra/otter-grader
    -q, --quiet      Build images without writing Docker messages to stdout
8.6.2 otter service create

Create database for otter-service

usage: otter service create [-h] [--db-host DB_HOST] [--db-port DB_PORT]
                            [-u DB_USER] [-p DB_PASS]

Named Arguments:
    --db-host        Postgres database host. Default: localhost
    --db-port        Postgres database port. Default: 5432
    -u, --db-user    Postgres database user. Default: root
    -p, --db-pass    Postgres database password. Default: root

8.6.3 otter service start

Start an otter-service instance

usage: otter service start [-h] [-c CONFIG] [-e ENDPOINT] [--port PORT]
                           [-k GOOGLE_KEY] [-s GOOGLE_SECRET]
                           [--db-host DB_HOST] [--db-port DB_PORT]
                           [-u DB_USER] [-p DB_PASS] [-l RATE_LIMIT]

Named Arguments:
    -c, --config          Path to config file
    -e, --endpoint        Address of this VM including port
    --port                Port for server to listen on. Default: 80
    -k, --google-key      Google OAuth key; use environment variable if not specified
    -s, --google-secret   Google OAuth secret; use environment variable if not specified
    --db-host             Postgres database host. Default: localhost
    --db-port             Postgres database port. Default: 5432
    -u, --db-user         Postgres database user. Default: root
    -p, --db-pass         Postgres database password. Default: root
    -l, --rate-limit      Rate limit for submissions in seconds
CHAPTER NINE

SUBMISSION EXECUTION

This section of the documentation describes the process by which submissions are executed in the autograder. Regardless of the method by which Otter is used, the autograding internals are all the same, although the execution differs slightly based on the file type of the submission.

9.1 Notebooks

If students are submitting IPython notebooks (.ipynb files), they are executed as follows:

The grades for each test are then collected from the list to which they were appended in the dummy environment, and any additional tests are run against the resulting environment. Running in this manner has one main advantage: it is robust to variable name collisions. If two tests rely on a variable of the same name, they will not be stymied by the variable being changed between the tests, because the results for each test are collected when that check is called rather than at the end of execution.

It is also possible to specify tests to run in the cell metadata without the need for explicit calls to Notebook.check. To do so, add an otter field to the cell metadata with the following structure:

{
    "otter": {
        "tests": [
            "q1",
            "q2",
            "etc."
        ]
    }
}

The strings within tests should correspond to filenames within the tests directory (without the .py extension) as would be passed to Notebook.check. Inserting this into the cell metadata will cause Otter to insert a check call on a covertly imported Notebook instance with the correct filename and collect the results at that point of execution, allowing for robustness to variable name collisions without explicit calls to Notebook.check. Note that these tests are not currently seen by any checks in the notebook, so a call to Notebook.check_all will not include the results of these tests.
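The metadata-driven rewriting described above can be sketched roughly as follows. This is an illustration, not Otter's actual implementation, and the `__otter_nb` name is a hypothetical stand-in for the covertly imported Notebook instance:

```python
def append_metadata_checks(cell_source, cell_metadata):
    """Append a check call for each test listed in the cell's otter.tests
    metadata, mimicking the rewriting described above."""
    tests = cell_metadata.get("otter", {}).get("tests", [])
    lines = [cell_source]
    for test in tests:
        # __otter_nb stands in for the covertly imported Notebook instance
        lines.append(f'__otter_nb.check("{test}")')
    return "\n".join(lines)
```

Running this on a cell whose metadata contains {"otter": {"tests": ["q1"]}} appends a q1 check immediately after the cell's own code, so the result is captured at exactly that point of execution.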
9.2 Scripts

If students are submitting Python scripts, they are executed as follows:

Similar to notebooks, the grades for tests are collected from the list in the returned environment and additional tests are run against it.

9.3 Logs

If grading from logs is enabled, the logs are executed in the following manner:

Similar to the above file types, the grades for tests are collected from the list in the returned environment and additional tests are run against it.
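The collect-from-environment pattern described in this chapter can be sketched in a few lines; the `check_results` name is hypothetical and stands in for the list Otter actually uses:

```python
def run_submission(source):
    """Execute a submission's source against a fresh global environment
    that contains a list for collecting check results."""
    env = {"check_results": []}  # hypothetical collection list
    exec(source, env)            # run the submission's code
    return env                   # graded results are read back out of env

submission = """
x = 2 ** 3
check_results.append(("q1", x == 8))
"""
env = run_submission(submission)
```

After execution, the grader reads `env["check_results"]` rather than re-running the checks, which is what makes the results robust to later mutation of the variables they depend on.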
CHAPTER TEN

PDF GENERATION AND FILTERING

Otter includes a tool for generating PDFs of notebooks that optionally incorporates notebook filtering for generating PDFs for manual grading. There are two options for exporting PDFs:

- PDF via LaTeX: this uses nbconvert, pandoc, and LaTeX to generate PDFs from TeX files
- PDF via HTML: this uses wkhtmltopdf and the Python packages pdfkit and PyPDF2 to generate PDFs from HTML files

Otter Export is used by Otter Assign to generate Gradescope PDF templates and solutions, in the Gradescope autograder to generate the PDFs of notebooks, by otter.Notebook to generate PDFs, and in Otter Grade when PDFs are requested.

Cell Filtering

Otter Export uses HTML comments in Markdown cells to perform cell filtering. Filtering is the default behavior of most exporter implementations, but not of the exporter itself. You can place HTML comments in Markdown cells to capture everything in between them in the output. To start a filtering group, place the comment <!-- BEGIN QUESTION --> wherever you want to start exporting and place <!-- END QUESTION --> at the end of the filtering group. Everything captured between these comments will be exported, and everything outside them removed. You can have multiple filtering groups in a notebook.

When using Otter Export, you can also optionally add page breaks after an <!-- END QUESTION --> by setting pagebreaks=True in otter.export.export_notebook or using the corresponding flags/arguments in whichever utility is calling Otter Export.

Otter Export Reference

Exports a Jupyter Notebook to PDF with optional filtering

usage: otter export [-h] [--filtering] [--pagebreaks] [-s] [-e [{latex,html}]]
                    [--debug] source [dest]
Positional Arguments:
    source           Notebook to export
    dest             Path to write PDF

Named Arguments:
    --filtering      Whether the PDF should be filtered
    --pagebreaks     Whether the PDF should have pagebreaks between questions
    -s, --save       Save intermediate file(s) as well
    -e, --exporter   Type of PDF exporter to use. Possible choices: latex, html
    --debug          Export in debug mode
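The filtering semantics described in this chapter can be approximated with a short regular-expression sketch. This is an illustration of the keep-only-what-is-between-the-comments behavior, not Otter Export's implementation:

```python
import re

BEGIN, END = "<!-- BEGIN QUESTION -->", "<!-- END QUESTION -->"

def filter_source(text):
    """Return only the content captured between BEGIN/END QUESTION
    comment pairs, in order; everything outside the pairs is dropped."""
    pattern = re.escape(BEGIN) + r"(.*?)" + re.escape(END)
    return "\n".join(m.strip() for m in re.findall(pattern, text, re.DOTALL))
```

A document with two filtering groups keeps both groups' contents and discards everything else, mirroring the behavior described under Cell Filtering.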
CHAPTER ELEVEN

INTERCELL SEEDING

The Otter suite of tools supports intercell seeding, a process by which notebooks and scripts can be seeded for the generation of pseudorandom numbers, which is very advantageous for writing deterministic hidden tests. Otter implements this through the use of a --seed flag, which can be set to any integer. We discuss the inner mechanics of intercell seeding in this section, while the flags and other UI aspects are discussed in the sections corresponding to the Otter tool you're using.

11.1 Seeding Mechanics

This section describes at a high level how seeding is implemented in the autograder at the layer of code execution.

Notebooks

When seeding in notebooks, both NumPy and random are seeded using an integer provided by the instructor. The seeding code is added to each cell's source before running it through the executor, meaning that the results of every cell are seeded with the same seed. For example, let's say we have the two cells below:

x = 2 ** np.arange(5)

and

y = np.random.normal(100)

They would be sent to the autograder as:

np.random.seed(SEED)
random.seed(SEED)
x = 2 ** np.arange(5)

and

np.random.seed(SEED)
random.seed(SEED)
y = np.random.normal(100)

where SEED is the seed you passed to Otter. This has two important consequences:

1. When writing assignments or using assignment generation tools like Otter Assign, the instructor must seed the solutions themselves before writing hidden tests in order to ensure they are grading the correct values.
2. Students will not have access to the random seed, so any values they compute in the notebook may be different from the results of their submission when it is run through the autograder.
With respect to (1), Otter Assign implements this behavior through the use of seeding cells that are discarded in the output. This has a natural consequence of (2), which highlights the importance of writing public tests that do not rely on the use of seeds unless they are provided in the distribution notebooks themselves (but I guess that renders the use of behind-the-scenes seeding useless, doesn't it?).

Python Scripts

Seeding Python files is simpler. The implementation is similar to that of notebooks, but the script is only seeded once, at the beginning. Thus, the Python file below:

import numpy as np

def sigmoid(t):
    return 1 / (1 + np.exp(-1 * t))

would be sent to the autograder as:

np.random.seed(SEED)
random.seed(SEED)

import numpy as np

def sigmoid(t):
    return 1 / (1 + np.exp(-1 * t))

You don't need to worry about importing NumPy and random before seeding, as these modules are loaded by the autograder and provided in the global env that the script is executed against.

Cautions

In this section, we highlight a few important things that bear repeating.

- Make sure to use the same seed when creating assignments. Also make sure that you pass this seed to the --seed flag of any Otter tool you use.
- Write public tests agnostic to the seed. Students won't have access to it, remember!
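The determinism that seeding provides can be demonstrated with the standard library alone. The real autograder also seeds NumPy, which is omitted here to keep the sketch dependency-free:

```python
import random

SEED = 42  # stands in for the value passed to the --seed flag

def run_cell(cell_source, env):
    """Mimic the autograder: prepend seeding code to a cell before executing it."""
    seeded = f"import random\nrandom.seed({SEED})\n" + cell_source
    exec(seeded, env)

# two runs of the same cell produce identical pseudorandom values
env_a, env_b = {}, {}
run_cell("x = random.random()", env_a)
run_cell("x = random.random()", env_b)
```

Because every cell is re-seeded before execution, a hidden test written against the instructor's seeded solution sees exactly the same values as the student's seeded submission.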
CHAPTER TWELVE

LOGGING

In order to assist with debugging students' checks, Otter automatically logs events when called from otter.Notebook or otter check. The events are logged in the following methods of otter.Notebook:

- __init__
- _auth
- check
- check_all
- export
- submit
- to_pdf

The events are stored as otter.logs.LogEntry objects, which are pickled and appended to a file called .OTTER_LOG. To interact with a log, the otter.logs.Log class is provided; logs can be read in using the class method Log.from_file:

from otter.logs import Log
log = Log.from_file(".OTTER_LOG")

The log order defaults to chronological (the order in which the entries were appended to the file), but this can be reversed by setting the ascending argument to False. To get the most recent results for a specific question, use Log.get_results:

log.get_results("q1")

Note that the otter.logs.Log class does not support editing the log file, only reading and interacting with it.

12.1 Logging Environments

Whenever a student runs a check cell, Otter can store their current global environment as a part of the log. The purpose of this is twofold: 1) to allow the grading of assignments to occur based on variables whose creation requires access to resources not possessed by the grading environment, and 2) to allow instructors to debug students' assignments by inspecting their global environment at the time of the check.

This behavior must be preconfigured with an Otter configuration file that has its save_environment key set to true. Shelving is accomplished by using the dill library to pickle (almost) everything in the global environment, with the notable exception of modules (so libraries will need to be reimported in the instructor's environment). The environment (a dictionary) is pickled and the result is then stored as a byte string in one of the fields of the log entry.
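The append-pickle storage described above can be sketched with the standard library. pickle stands in for dill, which Otter actually uses, and the entry objects here are plain dictionaries rather than LogEntry instances:

```python
import pickle

def append_entry(path, entry):
    """Append one pickled entry to the log file, as Otter does with .OTTER_LOG."""
    with open(path, "ab") as f:
        pickle.dump(entry, f)

def read_entries(path):
    """Read pickled entries back in the order they were appended."""
    entries = []
    with open(path, "rb") as f:
        while True:
            try:
                entries.append(pickle.load(f))
            except EOFError:
                return entries
```

Reading simply unpickles objects until the end of the file, which is why the natural order of a log is chronological.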
Environments can be saved to a log entry by passing the environment (as a dictionary) to LogEntry.shelve. Any variables that can't be shelved (or are ignored) are added to the unshelved attribute of the entry.

from otter.logs import LogEntry
entry = LogEntry()
entry.shelve(globals())

The shelve method also optionally takes a parameter variables that is a dictionary mapping variable names to fully-qualified type strings. If passed, only variables whose names are keys in this dictionary and whose types match their corresponding values will be stored in the environment. This helps avoid serializing unnecessary objects and prevents students from injecting malicious code into the autograder. To get the type string, use the function otter.utils.get_variable_type. As an example, the type string for a pandas DataFrame is "pandas.core.frame.DataFrame":

>>> import pandas as pd
>>> from otter.utils import get_variable_type
>>> df = pd.DataFrame()
>>> get_variable_type(df)
'pandas.core.frame.DataFrame'

With this, we can tell the log entry to only shelve dataframes named df:

from otter.logs import LogEntry
variables = {"df": "pandas.core.frame.DataFrame"}
entry = LogEntry()
entry.shelve(globals(), variables=variables)

If you are grading from the log and are utilizing variables, you must include this dictionary as a JSON string in your configuration; otherwise the autograder will deserialize anything that the student submits. This configuration is set in two places: in the Otter configuration file that you distribute with your notebook and in the autograder. Both of these are handled for you if you use Otter Assign to generate your distribution files.

To retrieve a shelved environment from an entry, use the LogEntry.unshelve method. During the process of unshelving, all functions have their globals updated to include everything in the unshelved environment and, optionally, anything in the environment passed to global_env.
>>> env = entry.unshelve()  # this will have everything in the shelf in it -- but not factorial
>>> from math import factorial
>>> env_with_factorial = entry.unshelve({"factorial": factorial})  # add factorial to all fn globals
>>> "factorial" in env_with_factorial["some_fn"].__globals__
True
>>> factorial is env_with_factorial["some_fn"].__globals__["factorial"]
True

See the reference below for more information about the arguments to LogEntry.shelve and LogEntry.unshelve.
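The variable-filtering behavior described above can be approximated in plain Python. get_type_string mimics the kind of fully-qualified type string that otter.utils.get_variable_type returns, and filter_env is an illustration rather than Otter's implementation:

```python
def get_type_string(obj):
    """Fully-qualified type string for an object, e.g. 'builtins.list'."""
    t = type(obj)
    return f"{t.__module__}.{t.__qualname__}"

def filter_env(env, variables):
    """Keep only variables whose names are keys in `variables` and whose
    type strings match the corresponding values."""
    return {
        name: value
        for name, value in env.items()
        if name in variables and get_type_string(value) == variables[name]
    }
```

For example, filter_env({"df": some_list, "x": 1}, {"df": "builtins.list"}) keeps only df, dropping both the unnamed and the wrongly-typed entries.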
12.2 Debugging with the Log

The log is useful to help students debug tests that they are repeatedly failing. Log entries store any errors thrown by the process tracked by that entry and, if the entry tracks a call to otter.Notebook.check, also the test results. Any errors held by a log entry can be re-thrown by calling LogEntry.raise_error:

from otter.logs import Log
log = Log.from_file(".OTTER_LOG")
entry = log.entries[0]
entry.raise_error()

The test results of an entry can be returned using LogEntry.get_results:

entry.get_results()

12.3 Grading from the Log

As noted earlier, the environments stored in logs can be used to grade students' assignments. If the grading environment does not have the dependencies necessary to run all code, the environments saved in the log entries will be used to run tests against. For example, if the execution hub has access to a large SQL server that cannot be accessed by a Gradescope grading container, these questions can still be graded using the log of checks run by the students and the environments pickled therein.

To configure these pregraded questions, include an Otter configuration file in the assignment directory that defines the notebook name and that the saving of environments should be turned on:

{
    "notebook": "hw00.ipynb",
    "save_environment": true
}

If you are restricting the variables serialized during checks, also set the variables or ignore_modules parameters. If you are grading on Gradescope, you must also tell the autograder to grade from the log using the --grade-from-log flag when running Otter Generate or the grade_from_log subkey of generate if using Otter Assign.

12.4 Otter Logs Reference

otter.logs.Log

class otter.logs.Log(entries, ascending=True)

A class for reading and interacting with a log. Allows you to iterate over the entries in the log and supports integer indexing. Does not support editing the log file.
Parameters:
    entries (list of LogEntry): the list of entries for this log
    ascending (bool, optional): whether the log is sorted in ascending (chronological) order; default True

entries
    the list of log entries in this log
    Type: list of LogEntry
ascending
    whether the log is sorted in ascending (chronological) order
    Type: bool

classmethod from_file(filename, ascending=True)
    Loads and sorts a log from a file.
    Parameters:
        filename (str): the path to the log
        ascending (bool, optional): whether the log should be sorted in ascending (chronological) order; default True
    Returns: Log: the Log instance created from the file

get_question_entry(question)
    Gets the most recent entry corresponding to the question question.
    Parameters: question (str): the question to get
    Returns: LogEntry: the most recent log entry for question
    Raises: QuestionNotInLogException if the question is not in the log

get_questions()
    Returns a sorted list of all question names that have entries in this log.
    Returns: the questions in this log
    Return type: list of str

get_results(question)
    Gets the most recent grading result for a specified question from this log.
    Parameters: question (str): the question name to look up
    Returns: the most recent result for the question
    Return type: otter.test_files.abstract_test.TestCollectionResults
    Raises: QuestionNotInLogException if the question is not found

question_iterator()
    Returns an iterator over the most recent entries for each question.
    Returns: the iterator
    Return type: QuestionLogIterator

sort(ascending=True)
    Sorts this log's entries by timestamp using LogEntry.sort_log.
    Parameters: ascending (bool, optional)
otter.logs.LogEntry

class otter.logs.LogEntry(event_type, shelf=None, unshelved=[], results=[], question=None, success=True, error=None)

An entry in Otter's log. Tracks event type, grading results, success of operation, and errors thrown.

Parameters:
    event_type (EventType): the type of event for this entry
    results (list of otter.test_files.abstract_test.TestCollectionResults, optional): the results of grading if this is an EventType.CHECK record
    question (str, optional): the question name
    success (bool, optional): whether the operation was successful
    error (Exception, optional): an error thrown by the process being logged, if any

event_type
    the entry type
    Type: EventType

shelf
    a pickled environment stored as a bytes string
    Type: bytes

unshelved
    a list of variable names that were unable to be pickled during shelving
    Type: list of str

results
    grading results if this is an EventType.CHECK entry
    Type: list of otter.test_files.abstract_test.TestCollectionResults

question
    question name if this is a check entry
    Type: str

success
    whether the operation tracked by this entry was successful
    Type: bool

error
    an error thrown by the tracked process, if applicable
    Type: Exception

timestamp
    timestamp of event in UTC
    Type: datetime.datetime

flush_to_file(filename)
    Appends this log entry (pickled) to a file.
    Parameters: filename (str): the path to the file to append this entry to

get_results()
    Get the results stored in this log entry.
    Returns: the results at this entry if this is an EventType.CHECK record
    Return type: list of otter.test_files.abstract_test.TestCollectionResults

get_score_perc()
    Returns the percentage score for the results of this entry.
    Returns: the percentage score
    Return type: float

static log_from_file(filename, ascending=True)
    Reads a log file and returns a sorted list of the log entries pickled in that file.
    Parameters:
        filename (str): the path to the log
        ascending (bool, optional): whether the log should be sorted in ascending (chronological) order; default True
    Returns: the sorted log
    Return type: list of LogEntry

raise_error()
    Raises the error stored in this entry.
    Raises: Exception: the error stored at this entry, if present

shelve(env, delete=False, filename=None, ignore_modules=[], variables=None)
    Stores an environment env in this log entry using dill, as a bytes object in the shelf attribute. Writes the names of any variables in env that are not stored to the unshelved attribute. If delete is True, old environments in the log at filename for this question are cleared before writing env. Any module names in ignore_modules will have their functions ignored during pickling.
    Parameters:
        env (dict): the environment to pickle
        delete (bool, optional): whether to delete old environments
        filename (str, optional)
        ignore_modules (list of str, optional): module names to ignore during pickling
        variables (dict, optional): map of variable name to type string indicating the only variables to include (all variables not in this dictionary will be ignored)
    Returns: this entry
    Return type: LogEntry

static shelve_environment(env, variables=None, ignore_modules=[])
    Pickles an environment env using dill, ignoring any functions whose module is listed in ignore_modules. Returns the pickle file contents as a bytes object and a list of variable names that were unable to be shelved/ignored during shelving.
    Parameters:
        env (dict): the environment to shelve
        variables (dict or list, optional): a map of variable name to type string indicating the only variables to include (all variables not in this dictionary will be ignored), or a list of variable names to include regardless of type
        ignore_modules (list of str, optional): the module names to ignore
    Returns: the pickled environment and a list of unshelved variable names
    Return type: tuple of (bytes, list of str)

static sort_log(log, ascending=True)
    Sorts a list of log entries by timestamp.
    Parameters:
        log (list of LogEntry): the log to sort
        ascending (bool, optional): whether the log should be sorted in ascending (chronological) order; default True
    Returns: the sorted log
    Return type: list of LogEntry

unshelve(global_env={})
    Parses the bytes object stored in the shelf attribute and unpickles the object stored there using dill. Updates the globals of any functions in the shelf to include elements in the shelf. Optionally includes the env passed in as global_env.
    Parameters: global_env (dict, optional): a global env to include in unpickled function globals
    Returns: the shelved environment
    Return type: dict

otter.logs.EventType

class otter.logs.EventType(value)

Enum of event types for log entries.

AUTH
    an auth event

BEGIN_CHECK_ALL
    beginning of a check-all call

BEGIN_EXPORT
    beginning of an assignment export

CHECK
    a check of a single question

END_CHECK_ALL
    ending of a check-all call

END_EXPORT
    ending of an assignment export

INIT
    initialization of an otter.Notebook object
SUBMIT
    submission of an assignment

TO_PDF
    PDF export of a notebook
CHAPTER THIRTEEN

RESOURCES

Here are some resources and examples for using Otter-Grader:

- Some demos for using Otter + R that demonstrate how to autograde R scripts, R Jupyter notebooks, and Rmd files, including how to use Otter Assign on the latter two formats.
- A quickstart guide to autograding R with Otter
CHAPTER FOURTEEN

CHANGELOG

v1.1.2:
- Made requirements specification always throw an error if a user-specified path is not found

v1.1.1:
- Pinned nbconvert<6.0.0 as a temporary measure due to new templating issues in that release

v1.1.0:
- Fixed handling variable name collisions with already-tested test files
- Added --no-pdfs flag to Otter Assign
- Moved Gradescope grading to inside conda env within container due to change in Gradescope's grading image
- Added ability to specify additional tests to be run in cell metadata without need of explicit Notebook.check cells

v1.0.1:
- Fixed bug with specification of overwriting requirements in Otter Generate

v1.0.0:
- Changed structure of CLI into six main commands: otter assign, otter check, otter export, otter generate, otter grade, and otter service
- Added R autograding integrations with autograding package ottr
- Added Otter Assign, a forked version of jassign that works with Otter
- Added Otter Export, a forked version of nb2pdf and gsexport for generating PDFs of notebooks
- Added Otter Service, a deployable grading service that students can POST their submissions to
- Added logging to otter.Notebook and Otter Check, incl. environment serialization for grading
- Changed filenames inside the package so that names match commands (e.g. otter/cli.py is now otter/grade.py)
- Added intercell seeding
- Moved all argparse calls into otter.argparser and the logic for routing Otter commands to otter.run
- Made several fixes to otter check, incl. ability to grade notebooks with it
- otter generate and otter grade now remove tmp directories on failure
- Fixed otter.ok_parser.CheckCallWrapper finding and patching instances of otter.Notebook
- Changed behavior of hidden test cases to use individual case "hidden" key instead of global "hidden" key
v0.4.8:
- Made use of metadata files in otter grade optional
- Added otter_ignore cell tag to flag a cell to be ignored during notebook execution

v0.4.7:
- added import of IPython.display.display to otter/grade.py
- patched Gradescope metadata parser for group submissions
- fix relative import issue on Gradescope (again, sigh)

v0.4.6: re-release of v0.4.5

v0.4.5:
- added missing patch of otter.Notebook.export in otter/grade.py
- added __version__ global in otter/__init__.py
- fixed relative import issue when running on Gradescope

v0.4.4:
- fixed not finding/rerunning tests on Gradescope with otter.Notebook.check
- fixed template escape bug in otter/gs_generator.py
- fixed dead link in docs/gradescope.md

v0.4.3:
- updated to Python 3.7 in setup.sh for Gradescope
- made otter and otter gen CLIs find ./requirements.txt automatically if it exists
- fix bug where GS generator fails if no -r flag specified

This is the documentation for the development version of Otter Grader. For the last stable release, visit https://otter-grader.readthedocs.io/en/stable/.

Otter Grader is a light-weight, modular open-source autograder developed by the Data Science Education Program at UC Berkeley. It is designed to grade Python and R assignments for classes at any scale by abstracting away the autograding internals in a way that is compatible with any instructor's assignment distribution and collection pipeline. Otter supports local grading through parallel Docker containers, grading using the autograder platforms of 3rd-party learning management systems (LMSs), the deployment of an Otter-managed grading virtual machine, and a client package that allows students to run public checks on their own machines. Otter is designed to grade executables, Jupyter Notebooks, and RMarkdown documents and is compatible with a few different LMSs, including Canvas and Gradescope.

Otter is managed by a command-line tool organized into six basic commands: assign, check, export, generate, grade, and service.
These commands provide functionality that allows instructors to create, distribute, and grade assignments locally or using a variety of LMS integrations. Otter also allows students to run publicly distributed tests while working through assignments.

- Otter Assign is an assignment development and distribution tool that allows instructors to create assignments with prompts, solutions, and tests in a simple notebook format that it then converts into sanitized versions for distribution to students and autograders.
- Otter Check allows students to run publicly distributed tests written by instructors against their solutions as they work through assignments to verify their thought processes and design implementations.
- Otter Export generates PDFs with optional filtering of Jupyter notebooks for manually grading portions of assignments.
Otter Generate creates the necessary files so that instructors can autograde assignments using Gradescope's autograding platform.

Otter Grade grades students' assignments locally on the instructor's machine in parallel Docker containers, returning grade breakdowns as a CSV file.

Otter Service is a deployable grading server to which students can submit their work; it grades these submissions and can optionally upload PDFs of the submissions to Gradescope for manual grading.
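The tests that Otter Assign distributes and that Otter Check and the autograders run are OK-format test files: Python files defining a global test dictionary. As a rough sketch of what such a file looks like (the exact schema may differ between Otter versions, and the question name and doctest here are illustrative, not from the source):

```python
# q1.py -- sketch of an OK-format test file as used by Otter
# (illustrative; field names follow the OK format that Otter inherits)
test = {
    "name": "q1",      # question identifier
    "points": 1,       # total points awarded for this question
    "suites": [
        {
            "type": "doctest",  # test cases are written as doctests
            "cases": [
                {
                    # the case passes if the doctest output matches
                    "code": ">>> 1 + 1\n2",
                    # hidden cases are stripped from the student version
                    # and only run on the autograder
                    "hidden": False,
                },
            ],
        }
    ],
}
```

As the Tutorial's directory listing shows, both the autograder and student distributions contain a tests subdirectory, but only autograder/tests retains the hidden cases.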
CHAPTER FIFTEEN

INSTALLATION

Otter is a Python package that is compatible with Python 3. The PDF export internals require either LaTeX and Pandoc or wkhtmltopdf to be installed. Docker is also required to grade assignments locally, and Postgres only if you're using Otter Service.

Otter's Python package can be installed using pip. To install the current stable version, install with

pip install otter-grader

If you are going to be autograding R, you must also install the R package using devtools::install_github:

devtools::install_github("ucbds-infra/[email protected]")

Installing the Python package will install the otter binary so that Otter can be called from the command line. If you are running Otter on Windows, this binary will not work. Instead, call Otter as a Python module: python3 -m otter. This will have the same commands, arguments, and behaviors as all calls to otter that are shown in the documentation.

15.1 Docker

Otter uses Docker to create containers in which to run the students' submissions. Docker and our Docker image are only required if using Otter Grade or Otter Service. Please make sure that you install Docker and pull our Docker image, which is used to grade the notebooks. To get the Docker image, run

docker pull ucbdsinfra/otter-grader
INDEX

A
ascending (otter.logs.Log attribute), 82
AUTH (otter.logs.EventType attribute), 85

B
BEGIN_CHECK_ALL (otter.logs.EventType attribute), 85
BEGIN_EXPORT (otter.logs.EventType attribute), 85

C
CHECK (otter.logs.EventType attribute), 85
check() (otter.check.notebook.Notebook method), 35
check_all() (otter.check.notebook.Notebook method), 35

E
END_CHECK_ALL (otter.logs.EventType attribute), 85
END_EXPORT (otter.logs.EventType attribute), 85
entries (otter.logs.Log attribute), 81
error (otter.logs.LogEntry attribute), 83
event_type (otter.logs.LogEntry attribute), 83
EventType (class in otter.logs), 85
export() (otter.check.notebook.Notebook method), 35

F
flush_to_file() (otter.logs.LogEntry method), 83
from_file() (otter.logs.Log class method), 82

G
get_question_entry() (otter.logs.Log method), 82
get_questions() (otter.logs.Log method), 82
get_results() (otter.logs.Log method), 82
get_results() (otter.logs.LogEntry method), 83
get_score_perc() (otter.logs.LogEntry method), 84

I
INIT (otter.logs.EventType attribute), 85

L
Log (class in otter.logs), 81
log_from_file() (otter.logs.LogEntry static method), 84
LogEntry (class in otter.logs), 83

N
Notebook (class in otter.check.notebook), 35

Q
question (otter.logs.LogEntry attribute), 83
question_iterator() (otter.logs.Log method), 82

R
raise_error() (otter.logs.LogEntry method), 84
results (otter.logs.LogEntry attribute), 83

S
shelf (otter.logs.LogEntry attribute), 83
shelve() (otter.logs.LogEntry method), 84
shelve_environment() (otter.logs.LogEntry static method), 84
sort() (otter.logs.Log method), 82
sort_log() (otter.logs.LogEntry static method), 85
SUBMIT (otter.logs.EventType attribute), 85
submit() (otter.check.notebook.Notebook method), 35
success (otter.logs.LogEntry attribute), 83

T
timestamp (otter.logs.LogEntry attribute), 83
TO_PDF (otter.logs.EventType attribute), 86
to_pdf() (otter.check.notebook.Notebook method), 35

U
unshelve() (otter.logs.LogEntry method), 85
unshelved (otter.logs.LogEntry attribute), 83