Item Response Resource

An Item Response in Surpass describes exactly how a candidate responded to a single item in a specific test and the mark they were awarded for it. This model is currently supported for Multiple Choice, Multiple Response, Either/Or and File Attach question types. A collection is also returned for unsupported question types, providing high-level information such as whether the item was attempted and the awarded mark.

The purpose of this resource is to make the raw data currently supplied in the result request more useful to the consuming application, providing richer information for external auditing and reporting systems.

Item Response Requests

The Item Response resource extends the result, analytics result and historical result resources, and can therefore only be called if the relevant result information has been supplied in the request URL. If the Item Response resource is requested without a specific Item Authoring reference, all Item Response URLs for that result are returned; a further request to each individual item response URL is then required to retrieve the full model. Examples of these requests are provided below:

https://....surpass.com/api/v2/result/{keycode}/itemResponse/
https://....surpass.com/api/v2/result/{keycode}/itemResponse/{ItemAuthoringId}
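
As a rough illustration of this two-step flow, the Python sketch below lists the Item Response URLs for a result and then follows each one. The instance URL, credentials, keycode and the 'itemResponses'/'href' field names are all illustrative placeholders, not confirmed parts of the API:

    import requests

    BASE_URL = "https://yourinstance.surpass.com/api/v2"  # placeholder instance URL
    HEADERS = {"Authorization": "Bearer <token>"}  # auth scheme depends on your instance
    KEYCODE = "ABCD1234"  # illustrative result keycode

    # Step 1: request every Item Response URL for the result.
    listing = requests.get(f"{BASE_URL}/result/{KEYCODE}/itemResponse/", headers=HEADERS)
    listing.raise_for_status()

    # Step 2: follow each returned URL to retrieve the full model.
    # The 'itemResponses' and 'href' names are illustrative assumptions.
    for entry in listing.json().get("itemResponses", []):
        item = requests.get(entry["href"], headers=HEADERS)
        item.raise_for_status()
        print(item.json())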
 
In addition, the Item Response models for all items within a particular result can be requested by passing the parameter 'showItemResponses=true' as part of the result request URL. The models are returned within the Item collection. An example of this request is provided below:

https://....surpass.com/api/v2/result/{keycode}?showItemResponses=true
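
A single request can therefore replace the per-item calls in the earlier sketch (the same placeholder values are assumed):

    import requests

    BASE_URL = "https://yourinstance.surpass.com/api/v2"  # placeholder instance URL
    HEADERS = {"Authorization": "Bearer <token>"}  # auth scheme depends on your instance
    KEYCODE = "ABCD1234"  # illustrative result keycode

    # One request: the Item Response models come back embedded in the
    # Item collection of the result, avoiding per-item follow-up calls.
    result = requests.get(f"{BASE_URL}/result/{KEYCODE}",
                          params={"showItemResponses": "true"},
                          headers=HEADERS)
    result.raise_for_status()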

Below we have provided all possible GET Item Response request URLs:

  • api/v2/TestSession/{keycode}/ItemMarks
  • api/v2/TestSession/{keycode}/ItemResponses
  • api/v2/result/{keycode}/itemResponse/
  • api/v2/result/{id}/itemResponse/
  • api/v2/result/{keycode}/itemResponse/{ItemAuthoringId}
  • api/v2/result/{id}/itemResponse/{ItemAuthoringId}
  • api/v2/AnalyticsResult/{keycode}/itemResponse/
  • api/v2/AnalyticsResult/{id}/itemResponse/
  • api/v2/AnalyticsResult/{keycode}/itemResponse/{ItemAuthoringId}
  • api/v2/AnalyticsResult/{id}/itemResponse/{ItemAuthoringId}
  • api/v2/HistoricalResult/{id}/itemResponse/
  • api/v2/HistoricalResult/{id}/itemResponse/{ItemAuthoringId}

An example XML and JSON request and response for all item responses returned for one keycode can be found by selecting the links below:

XML example
JSON example

Item Response models - Question types

The Item Response model is built to be consistent with the Item resource, so the response contains a separate collection for each supported question type, as the model can differ between question types. The model for each question type is defined below, together with examples. At the itemResponse level of the response, only the question-type collections (empty where not relevant) and the href for the item response are provided.
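
For illustration only, a consumer might inspect which collection is populated as in the sketch below; the collection names are assumptions based on the question types documented here, not confirmed field names:

    def question_type(item_response: dict) -> str:
        """Sketch: report which question-type collection is populated.

        Empty collections mean the type is not relevant to this item;
        the collection names here are illustrative."""
        for name in ("multipleChoiceQuestions", "multipleResponseQuestions",
                     "eitherOrQuestions", "fileAttachQuestions"):
            if item_response.get(name):
                return name
        return "unsupported"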


Multiple Choice Questions

Property Name | Type | Description
optionList | collection | A collection of the available answer options for the item
optionList / id | int | The unique identifier for each answer option
optionList / correct | boolean | A flag to identify if the answer option has been identified as correct
optionList / selected | boolean | A flag to identify which option the candidate has selected
optionList / htmlText | XML | The answer option text including the HTML formatting applied
optionList / label | string | The label assigned to the answer option by the item author
optionList / weightedMark | int | Any weighted mark that has been applied to an answer option
optionList / requiresExtraInfo | boolean | A flag to identify if an answer box has been enabled for the answer option in ‘Item Authoring’. If enabled, the candidate must add further information if they select that answer option. Note: can only apply to Multiple Choice survey questions in HTML delivery.
optionList / extraInfo | string | The information the candidate has provided in the answer box. Note: can only apply to Multiple Choice survey questions in HTML delivery.
optionList / extraInfoLabel | string | The label that appears above the answer box for candidates in delivery. This is defined in ‘Item Authoring’. Note: can only apply to Multiple Choice survey questions in HTML delivery.
userMark | int | The mark awarded for the item, between 0 and 1
markType | enumeration | The mark type for the item. Standard or Weighted are the available options
awardedMark | int | The mark awarded for the item
id | string | The unique Surpass Item Authoring reference for this item
weighting | int | The weighting applied to this question on the page
attempted | boolean | If the candidate has attempted the item

  • optionList / weightedMark will return null if markType is Standard
  • optionList / correct will return null if markType is Weighted
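
To show how this model might be consumed, here is a minimal Python sketch that reports the candidate's selection while respecting the null behaviour in the notes above (the payload is assumed to follow the table exactly):

    def summarise_mcq(mcq: dict) -> None:
        """Sketch: print the selection for one Multiple Choice item response."""
        for option in mcq["optionList"]:
            if option["selected"]:
                if mcq["markType"] == "Standard":
                    # 'correct' is populated for Standard marking only.
                    print(f"Selected {option['label']}: correct={option['correct']}")
                else:
                    # Under Weighted marking each option carries its own mark.
                    print(f"Selected {option['label']}: weightedMark={option['weightedMark']}")
        print(f"attempted={mcq['attempted']}, awardedMark={mcq['awardedMark']}")
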
Examples of the Multiple Choice question item response model, in both JSON and XML, can be found by selecting the links below:

XML Example
JSON Example


Multiple Response Questions

Property Name | Type | Description
combinations | collection | If combination answer options are used, a collection of the authored combination groups
combinations / mark | int | The mark assigned to that combination of answer options
combinations / selected | boolean | A flag to identify if the candidate selected that combination
combinations / options | collection | The answer options within that combination group
combinations / options / id | int | The ID of the answer options in that combination group
partialMarks | boolean | A flag to identify if a candidate can receive partial marks for getting one answer correct
optionList | collection | A collection of the available answer options for the item
optionList / id | int | The unique identifier for each answer option
optionList / correct | boolean | A flag to identify if the answer option has been identified as correct
optionList / selected | boolean | A flag to identify which option the candidate has selected
optionList / htmlText | XML | The answer option text including the HTML formatting applied
optionList / label | string | The label assigned to the answer option by the item author
optionList / weightedMark | int | Any weighted mark that has been applied to an answer option
userMark | int | The mark awarded for the item, between 0 and 1
markType | enumeration | The mark type for the item. Standard or Combination are the available options
awardedMark | int | The mark awarded for the item
id | string | The unique Surpass Item Authoring reference for this item
weighting | int | The weighting applied to this question on the page
attempted | boolean | If the candidate has attempted the item

  • The combinations collection will be empty unless markType is Combination
  • If the Combination mark type is selected, there will not be a correct answer in the option list
  • partialMarks is not relevant if Combination is selected
  • weightedMark remains null and is not relevant to this model
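
As a minimal sketch of how the two marking modes differ in practice (the payload shape is assumed to follow the table and notes above):

    def summarise_mrq(mrq: dict) -> None:
        """Sketch: report how a Multiple Response item was answered and marked."""
        if mrq["markType"] == "Combination":
            # Combination marking: the mark comes from the selected group,
            # and the option list carries no 'correct' flags.
            for combo in mrq["combinations"]:
                if combo["selected"]:
                    ids = [opt["id"] for opt in combo["options"]]
                    print(f"Selected combination {ids}, worth {combo['mark']}")
        else:
            # Standard marking: compare selections against the keyed options.
            chosen = sorted(o["id"] for o in mrq["optionList"] if o["selected"])
            keyed = sorted(o["id"] for o in mrq["optionList"] if o["correct"])
            print(f"Chose {chosen}, key was {keyed}; "
                  f"partialMarks={mrq['partialMarks']}, awarded {mrq['awardedMark']}")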

Examples of the Multiple Response question item response model, in both JSON and XML, can be found by selecting the links below:

XML Example
JSON Example  


Either / Or Questions

Property Name | Type | Description
userMark | int | The mark awarded for the item, between 0 and 1
optionList | collection | A collection of the available answer options for the item
optionList / id | int | The unique identifier for each answer option
optionList / correct | boolean | A flag to identify if the answer option has been identified as correct
optionList / selected | boolean | A flag to identify which option the candidate has selected
optionList / htmlText | XML | The answer option text including the HTML formatting applied
optionList / label | string | The label assigned to the answer option by the item author
awardedMark | int | The mark awarded for the item
id | string | The unique Surpass Item Authoring reference for this item
weighting | int | The weighting applied to this question on the page
attempted | boolean | If the candidate has attempted the item
Examples of the Either/Or question item response model, in both JSON and XML, can be found by selecting the links below:

XML Example
JSON Example  

File Attach Questions

Property Name | Type | Description
userMark | int | The mark awarded for the item, between 0 and 1
awardedMark | int | The mark awarded for the item
id | string | The unique Surpass Item Authoring reference for this item
weighting | int | The weighting applied to this question on the page
attempted | boolean | If the candidate has attempted the item
files | resource | The group of files the candidate uploaded
files / fileName | string | The name of the file given by the candidate in delivery
files / content | Base64 encoded string | The content of the file uploaded by the candidate
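
Because files / content is a Base64 encoded string, a consuming application must decode it before writing the file to disk. A minimal sketch, assuming the files collection is shaped exactly as in the table above:

    import base64
    from pathlib import Path

    def save_attached_files(file_attach: dict, directory: str = ".") -> None:
        """Sketch: decode and save each file uploaded to a File Attach item."""
        for f in file_attach["files"]:
            # 'content' is Base64 encoded per the model definition above.
            data = base64.b64decode(f["content"])
            # The file keeps the name the candidate gave it in delivery.
            Path(directory, f["fileName"]).write_bytes(data)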

Unsupported Question Types

Property Name | Type | Description
awardedMark | int | The mark awarded for the item
id | string | The unique Surpass Item Authoring reference for this item
weighting | int | The weighting applied to this question on the page
attempted | boolean | If the candidate has attempted the item
Examples of unsupported question types, in both JSON and XML, can be found by selecting the links below:

XML Example
JSON Example 

Update to Marks and Responses for the TestSession

For tests that have been flagged as 'Take on Paper', or that have 'Upload Responses' marked as 'true' on the Test Session, there are two methods to upload results: upload the Item Responses for MCQ, MRQ and Either/Or questions in a test, or upload Item Marks for all question types.

POST marks for a TestSession

Using this method, you can post marks for any item type for TestSessions where the 'uploadResponses' field is marked 'true' and where marking has taken place externally to Surpass.

Property Name | Type | Description
itemId | string | The Surpass item ID for the test.
mark | int | The mark the candidate achieved for an item.
notAttempted | boolean | Shows whether or not the candidate attempted the question. If marked as 'true' the candidate will achieve no marks.
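
As a sketch, these marks might be posted to the api/v2/TestSession/{keycode}/ItemMarks URL listed earlier. The item IDs and the array envelope below are illustrative assumptions; confirm the exact body shape against your instance:

    import requests

    BASE_URL = "https://yourinstance.surpass.com/api/v2"  # placeholder instance URL
    HEADERS = {"Authorization": "Bearer <token>"}  # auth scheme depends on your instance
    KEYCODE = "ABCD1234"  # illustrative keycode

    item_marks = [
        {"itemId": "1234P5678", "mark": 2, "notAttempted": False},
        {"itemId": "1234P5679", "mark": 0, "notAttempted": True},  # scores no marks
    ]
    resp = requests.post(f"{BASE_URL}/TestSession/{KEYCODE}/ItemMarks",
                         json=item_marks, headers=HEADERS)
    resp.raise_for_status()
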
POST candidate responses for a TestSession (OMR only)

Upload the candidate responses to questions in a TestSession. This is only applicable for tests containing MCQ, MRQ and Either/Or questions.

Property Name | Type | Description
questionNumber | string | The position of the question in the test.
answer | string | The response provided by the candidate to the item.
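
A matching sketch for OMR response upload, assuming the api/v2/TestSession/{keycode}/ItemResponses URL from the list above; the single-letter answer format shown is an illustrative assumption:

    import requests

    BASE_URL = "https://yourinstance.surpass.com/api/v2"  # placeholder instance URL
    HEADERS = {"Authorization": "Bearer <token>"}  # auth scheme depends on your instance
    KEYCODE = "ABCD1234"  # illustrative keycode

    responses = [
        {"questionNumber": "1", "answer": "A"},
        {"questionNumber": "2", "answer": "B"},
    ]
    resp = requests.post(f"{BASE_URL}/TestSession/{KEYCODE}/ItemResponses",
                         json=responses, headers=HEADERS)
    resp.raise_for_status()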
