  • Introduction

    Caesar is an evolution of the Nero codebase, made more generic. In essence, Caesar receives classifications from the event stream (a Lambda script sends them to Caesar's HTTP API).

    For each classification, it runs zero or more extractors defined in the workflow to generate "extracts". Each extract summarizes information drawn from the full classification.

    Whenever extracts change, Caesar will then run zero or more reducers defined in the workflow. Each reducer receives all the extracts, merged into one hash per classification. The task of the reducer is to aggregate results from multiple classifications into key-value pairs, where values are simple data types: integers or booleans. The output of each reducer is stored in the database as a Reduction.

    Whenever a reduction changes, Caesar will then run zero or more rules defined in the workflow. Each rule is a boolean statement that can look up values produced by reducers (by key) and compare them. Rules support logic clauses like and / or / not. When a rule evaluates to true, all of the effects associated with that rule are performed. For instance, an effect might be to retire a subject.

    ┏━━━━━━━━━━━━━━━━━━┓
    ┃     Kinesis      ┃
    ┗━━━┳━━━━━━━━━━━━━━┛
        │                                                       ┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐
        │                                                         EXTRACTS:
        │   ┌ ─ ─ ─ ─ ─ ─ ─ ─ ┐         ┌──────────────────┐    │                           │
        ├──▶ Classification 1  ────┬───▶│ FlaggedExtractor │──────▶{flagged: true}
        │   └ ─ ─ ─ ─ ─ ─ ─ ─ ┘    │    └──────────────────┘    │                           │
        │                          │    ┌──────────────────┐
        │                          └───▶│ SurveyExtractor  │────┼─▶{raccoon: 1}             │
        │                               └──────────────────┘
        │   ┌ ─ ─ ─ ─ ─ ─ ─ ─ ┐         ┌──────────────────┐    │                           │
        └──▶ Classification 2  ────┬───▶│ FlaggedExtractor │──────▶{flagged: false}
            └ ─ ─ ─ ─ ─ ─ ─ ─ ┘    │    └──────────────────┘    │                           │
                                   │    ┌──────────────────┐
                                   └───▶│ SurveyExtractor  │────┼─▶{beaver: 1, raccoon: 1}  │
                                        └──────────────────┘
       ┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐                          └ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┘
         REDUCTIONS:                                                          │
       │                             │                                        │
          {                                                                   │
       │    votes_flagged: 1,        │  ┌──────────────────┐                  │
            votes_beaver: 1,      ◀─────│ VoteCountReducer │◀─────────────────┘
       │    votes_raccoon: 2         │  └──────────────────┘
          }
       │                             │
                                                                                  ┏━━━━━━━━━━━━━━━━┓
       │  {                          │  ┌──────────────────┐                      ┃Some script run ┃
            swap_confidence: 0.23 ◀─────│ ExternalReducer  │◀────HTTP API call────┃by project owner┃
       │  }                          │  └──────────────────┘                      ┃  (externally)  ┃
                                                                                  ┗━━━━━━━━━━━━━━━━┛
       └ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┘
                      │
                      │
                      │                 ┌──────────────────┐         POST         ┏━━━━━━━━━━━━━━━━┓
                      └────────────────▶│       Rule       │───/subjects/retire──▶┃    Panoptes    ┃
                                        └──────────────────┘                      ┗━━━━━━━━━━━━━━━━┛
    

    The diagram above makes this concrete: a survey-task workflow in which a FlaggedExtractor and a SurveyExtractor produce extracts from each classification, a VoteCountReducer and an external reducer aggregate those extracts into reductions, and a rule retires the subject via Panoptes when its condition is met.

    Reducers can reduce across multiple subjects' extracts if the following is included in the new subject's metadata (when uploaded to Panoptes): { previous_subject_ids: [1234] }. Extracts whose subject ids match an id in that array will be included in reductions for the new subject.

    Usage

    Caesar listens to classification events for workflows from the event stream. You need to tell Caesar to listen to a given workflow, and it must be a workflow you have access to on zooniverse.org.

    There are two ways to configure Caesar: manually via the UI, or programmatically via the API. The examples below assume a Zooniverse.org workflow with id = 1234.

    Configuring Caesar via the Web UI

    Configuring Caesar via the API

    1. Create a workflow
      • POST JSON request to https://caesar.zooniverse.org/workflows/?id=1234
    2. Update the workflow with a configuration object payload (see rules & effects)
      • PUT JSON request to /workflows/1234 with payload { "extractors_config": { }, "reducers_config": { }, "rules_config": [ { "if": ["gte", ["lookup", "survey-total-VHCL"], ["const", 1]], "then": [{"action": "retire_subject"}] } ] } (see the curl sketch below)
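
    As a sketch, step 2 could be issued with curl like this (the payload mirrors the example above; substitute your own workflow id and a valid Panoptes bearer token):

    curl -X PUT "https://caesar.zooniverse.org/workflows/1234" \
      -H "Authorization: Bearer $TOKEN" \
      -H "Content-Type: application/json" \
      -H "Accept: application/json" \
      -d '{
            "extractors_config": { },
            "reducers_config": { },
            "rules_config": [
              { "if": ["gte", ["lookup", "survey-total-VHCL"], ["const", 1]],
                "then": [{ "action": "retire_subject" }] }
            ]
          }'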

    Authentication

    To authenticate, use an OAuth bearer token obtained from Panoptes:

    # With shell, you can just pass the correct header with each request
    curl "api_endpoint_here"
      -H "Authorization: Bearer xyz"
    

    Caesar uses the same OAuth bearer token as Panoptes to allow access to the API. By default any data relating to a workflow is only accessible to project owners and collaborators.

    Caesar expects the bearer token to be included in all API requests to the server, in a header that looks like the following:

    Authorization: Bearer xyz

    Extracts

    Get extracts

    GET /workflows/$WORKFLOW_ID/extractors/$EXTRACTOR_KEY/extracts?subject_id=$SUBJECT_ID HTTP/1.1
    Content-Type: application/json
    Accept: application/json
    Authorization: Bearer $TOKEN
    

    The above command returns JSON structured like this:

    [
        {
            "classification_at": "2017-05-16T15:51:13.544Z",
            "classification_id": 54376560,
            "created_at": "2017-05-16T20:37:39.124Z",
            "data": null,
            "extractor_key": "c",
            "id": 411083,
            "subject_id": 458033,
            "updated_at": "2017-05-16T20:37:39.124Z",
            "user_id": 108,
            "workflow_id": 4084
        }
    ]
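
    The same request can be issued with curl (substituting real values):

    curl "https://caesar.zooniverse.org/workflows/$WORKFLOW_ID/extractors/$EXTRACTOR_KEY/extracts?subject_id=$SUBJECT_ID" \
      -H "Content-Type: application/json" \
      -H "Accept: application/json" \
      -H "Authorization: Bearer $TOKEN"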
    

    Extracts are pieces of information relating to a specific classification (and therefore to a specific subject as well).

    Query Parameters

    Parameter Default Description
    WORKFLOW_ID null Required · Specifies which workflow
    SUBJECT_ID null Required · Specifies which subject
    EXTRACTOR_KEY null Required · Specifies which extractor to fetch extracts from.

    Create & update extracts

    Inserting and updating extracts happens through one and the same API endpoint, which performs an "upsert".

    POST /workflows/$WORKFLOW_ID/extractors/$EXTRACTOR_KEY/extracts HTTP/1.1
    Content-Type: application/json
    Accept: application/json
    Authorization: Bearer $TOKEN
    
    {
        "subject_id": 458033,
        "classification_at": "2017-05-16T15:51:13.544Z",
        "classification_id": 54376560,
        "user_id": 108,
        "data": {"PENGUIN": 1, "POLARBEAR": 4}
    }
    

    Body fields

    The request body should be encoded as JSON with the following fields:

    Parameter Default Description
    subject_id null Required · Specifies which subject this extract is about.
    classification_id null Required on create · Specifies which classification this extract is about. May be omitted if the request is known to be an update rather than a create.
    classification_at null Required on create · Specifies when the classification happened; used to sort extracts by classification time when reducing them. May be omitted if the request is known to be an update rather than a create.
    user_id null User that made the classification; null signifies anonymous.

    Reducer Configuration

    Configuring reducers can be tricky because they are flexible in so many different ways.

    Extractor Keys

    Sometimes multiple extractors will be defined but a particular reducer only cares about, or can only work with, a particular type of extract. In this case, you can use the extractor keys property to restrict the extracts that are sent to this reducer. The value is either a string (a single extractor key) or an array of strings (multiple extractor keys). The default (a blank string or nil) sends all extracts.

    Topic

    Extracts are always implicitly grouped before being combined. There are two topics, whose names are hopefully self-explanatory: reduce_by_subject and reduce_by_user. The default is reduce_by_subject.

    Grouping

    This is a confusing setting because extracts are already grouped according to the topic. Grouping allows an additional grouping pass which, crucially, can be done on the value of a specified field. To configure it, set the name of the field to group by (in the format extractor_key.field_name) and a flag indicating what to do when the extracts for a given classification are missing that field. The value of the grouping field is reflected in the name of the group, stored in the subgroup field of the reduction. The default behavior is not to perform this secondary grouping.

    Reduction Mode

    This is probably the least understood part of configuring reducers. Briefly, the system offers two very different modes of performing reduction. These are:

    Default Reduction

    In "default reduction" mode, each time a new extract is created, we fetch all of the other extracts for that subject (or user) and send them all to the reducer for processing. In cases where extracts are coming in very quickly, this can create some extra work fetching extracts, but is guaranteed to be free of race conditions because each new reduction will get a chance to reduce across all relevant extracts. This mode is much simpler and is preferred in almost every case. However, in the case where a given subject (or user) is likely to have thousands of associated extracts, it is recommended to use "running reduction" mode.

    Running Reduction

    "Running reduction" mode was created to support the Notes for Nature use case, where we are reducing across a user's entire classification history within a given project, which could run to tens of thousands of items for power users. In this use case, fetching all 10,000 extracts each time a new extract is created is impractical and the operations we want to perform are relatively simple to perform using only the new extracts created in a given extraction pass.

    When a reducer is configured for running reduction, each time a new classification produces new extracts, the reducer is invoked with only those new extracts. Any additional information it needs in order to correctly compute the reduction should be kept in a field on the reduction called a store. With the new extracts and the store, the reducer computes an updated value and updates its store appropriately. However, this can't be done in a multithreaded way, or the object might be visible while in an inconsistent state (for example, its store has been updated but its value has not). Accordingly, we use optimistic locking semantics: we prefetch all possibly relevant extracts and reductions before reducing, and throw a sync error if the object versions don't match when we try to save. Further, we need to avoid updating the reduction multiple times with the same extract, which is not a concern in default reduction mode. Therefore, running reduction populates a relation tracking which extracts have been incorporated into which reductions. Between this and the synchronization retries, there is considerable added complexity and overhead compared to default reduction mode. It's not recommended to use running reduction mode with external reducers, because of the added complexity of writing reducers that reduce from a store.
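
    For example, a single reducer entry in reducers_config that uses these settings might look roughly like the sketch below. The property names and nesting shown here (extractor_keys, topic, grouping, reduction_mode) are illustrative assumptions pieced together from the descriptions above, not a verified schema; in some Caesar versions these live under a filters object, so check the Caesar UI or source for the exact keys.

    {
      "reducers_config": {
        "survey-total": {
          "type": "count",
          "extractor_keys": ["survey"],          # only consume extracts from the "survey" extractor
          "topic": "reduce_by_subject",          # the default; reduce_by_user reduces across a user's extracts instead
          "grouping": {
            "field_name": "survey.choice",       # extractor_key.field_name whose value names the subgroup
            "if_missing": "ignore"               # assumed flag for classifications whose extracts lack that field
          },
          "reduction_mode": "running_reduction"  # prefer default reduction unless extract volume is very large
        }
      }
    }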


    Reduction Mode Examples

    This example is to clarify the difference between how default reduction and running reduction work. Imagine the extract from each classification produces a number from 0 to 10 and the reducer computes the average of these numbers.

    The same extracts are processed by each reducer in the same order, and the tables below illustrate how the values in the system change as each extract arrives.

    Default Reduction

    Extract ID | Extract Value | Extracts to Reducer | Store Value In | Calculation | Store Value | Items in Association
    1          | 5             | 1                   | nil            | 5/1         | nil         | 0
    2          | 3             | 1, 2                | nil            | (5+3)/2     | nil         | 0
    2          | 3             | 1, 2                | nil            | (5+3)/2     | nil         | 0
    3          | 4             | 1, 2, 3             | nil            | (5+3+4)/3   | nil         | 0

    Running Reduction

    Extract ID | Extract Value | Extracts to Reducer | Store Value In | Calculation   | Store Value | Items in Association
    1          | 5             | 1                   | nil            | (0*0+5)/(0+1) | 1           | 1
    2          | 3             | 2                   | 1              | (5*1+3)/(1+1) | 2           | 2
    2          | 3             | nil                 | N/A            | N/A           | 2           | 2
    3          | 4             | 3                   | 2              | (4*2+4)/(2+1) | 3           | 3

    Points of Note

    Note that in default reduction mode, re-reduction is always triggered, even when an extract has already been processed. Also notice that each computation in default reduction consumes all of the extracts: we calculate the average by summing the values of every extract and dividing by the number of extracts.

    In running reduction, on the other hand, the store keeps a running count of how many items the reducer has seen. Together with the previous value of the reduction, this store can be used to compute the new average from only the new value, using the formula ((old average * old count) + new value) / (old count + 1), and the store is then updated with the new count (old count + 1).
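
    As a plain-Ruby sketch of that arithmetic (this is not Caesar's reducer API, just the update rule, with the store modelled as a hash holding the count):

    # Running-average update: combine the previous reduction value, the store,
    # and one new extract value, as described above.
    def running_average(old_value, store, new_extract_value)
      old_count = store.fetch('count', 0)
      new_value = ((old_value.to_f * old_count) + new_extract_value) / (old_count + 1)
      [new_value, store.merge('count' => old_count + 1)]
    end

    value, store = 0, {}
    [5, 3, 4].each do |extract_value|   # the extract values from the tables above
      value, store = running_average(value, store, extract_value)
    end
    # value => 4.0, store => {"count"=>3}, matching the final row of the running reduction table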

    When using running reducers for performance reasons, please keep in mind that the performance benefits of running reduction are only realized if every reducer for that reducible is executed in running mode. The primary advantage of running reduction is that it eliminates the need to load large numbers of extracts for a given subject or user.

    Subject Metadata

    Caesar can reflect on several attributes in a subject's metadata to know how to perform certain actions.

    #training_subject:

    #previous_subject_ids: when a newly uploaded subject's metadata contains { previous_subject_ids: [1234] }, extracts whose subject ids match an id in that array are included in reductions for the new subject (see the Introduction).

    Rules

    A workflow can configure one or many rules. Each rule has a condition and one or more effects that happen when that condition evaluates to true. Conditions can be nested to achieve complicated if statements.

    Rules may pertain to either subjects or users. Rules have an evaluation order, which can be set in the database if need be; rules can then either all be evaluated, or be evaluated only until the first true condition is reached.

    Conditions

    The condition is a single operation, but some types of operations can be nested. The general syntax is as if you were writing Lisp in JSON: a condition is always an array whose first item is a string identifying the operator, and whose remaining items are arguments, which may themselves be operations: [operator, arg1, arg2, ...].

    Sample conditions

    If one or more vehicles is detected

    From the console:

    SubjectRule.new workflow_id: 123, condition: ['gte', ['lookup', 'survey-total-VHCL'], ['const', 1]], row_order: 1

    Input into the UI:

    ["gte", ["lookup", "survey-total-VHCL"], ["const", 1]]

    If the most likely identification is "HUMAN"

    From the console:

    SubjectRule.new workflow_id: 123, condition: ['eq', ['lookup', 'consensus.most_likely', ''], ['const', 'HUMAN']], row_order: 3

    Input into the UI:

    ["eq", ["lookup", "consensus.most_likely", ""], ["const", "HUMAN"]]
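
    Conditions can also be nested with and / or / not. For example, a rule like "IF likelihood < 0.1 AND classifications_count > 5 THEN retire" (see the SWAP section below) might be written roughly as follows, assuming lt and gt comparison operators and reductions keyed likelihood and classifications_count:

    ["and",
      ["lt", ["lookup", "likelihood", 1], ["const", 0.1]],
      ["gt", ["lookup", "classifications_count", 0], ["const", 5]]
    ]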

    Effects

    Each rule can have one or more effects associated with it. Those effects will be performed when that rule's condition evaluates to true. Subject Rules have effects that affect subjects (and implicitly receive subject_id as a parameter) and User Rules have effects that affect users (user_id).

    Subject Rule Effects

    effect_type               | config parameters       | Effect Code
    retire_subject            | reason (string)*        | Effects::RetireSubject
    add_subject_to_set        | subject_set_id (string) | Effects::AddSubjectToSet
    add_subject_to_collection | collection_id (string)  | Effects::AddSubjectToCollection
    external_effect           | url (string)**          | Effects::ExternalEffect

    * The Panoptes API validates reason against a list of permitted values: choose from blank, consensus, or other.

    ** url must be HTTPS

    User Rule Effects

    effect_type  | config parameters    | Effect Code
    promote_user | workflow_id (string) | Effects::ExternalEffect

    Sample Effects

    Retire a subject

    From the console:

    SubjectRuleEffect.new(
      rule_id: 123,
      effect_type: 'retire_subject',
      config: { reason: 'consensus' }
    )
    

    In the UI:

    These can be configured normally in the UI; there's nothing complicated like the condition field.

    Promote a user to a new workflow

    From the console:

    UserRuleEffect.new rule_id: 234, effect_type: 'promote_user', config: { 'workflow_id': '555' }

    How to do SWAP

    In Panoptes, set workflow.configuration to something like:

    {"subject_set_chances": {"EXPERT_SET_ID": 0}}
    

    In Caesar, set the workflow like so:

    {
      "extractors_config": {
        "who": {"type": "who"},
        "swap": {"type": "external", "url": "https://darryls-server.com"} # OPTIONAL
      },
      "reducers_config": {
        "swap": {"type": "external"},
        "count": {"type": "count"}
      },
      "rules_config": [
        {"if": [RULES], "then": [{"action": "retire_subject"}]}
      ]
    }
    

    When you detect an expert user, update their probabilities like this:

    POST /api/project_preferences/update_settings?project_id=PROJECT_ID&user_id=USER_ID HTTP/1.1
    Host: panoptes-staging.zooniverse.org
    Authorization: Bearer TOKEN
    Content-Type: application/json
    Accept: application/vnd.api+json; version=1
    
    {
      "project_preferences": {
        "designator": {
          "subject_set_chances": {
            "WORKFLOW_ID": {"SUBJECT_SET_ID": 0.5}
          }
        }
      }
    }
    

    And store expert-seenness in Caesar so that you can use it in the rules:

    POST /workflows/WORKFLOW_ID/reducers/REDUCER_KEY/reductions HTTP/1.1
    Host: caesar-staging.zooniverse.org
    Authorization: Bearer TOKEN
    Content-Type: application/json
    Accept: application/json
    
    {
      "likelyhood": 0.864,
      "seen_by_expert": false
    }
    

    This document is a reference to the current state of affairs on doing SWAP on the Panoptes platform (by which we mean the Panoptes API, Caesar, and Designator).

    To do SWAP, one must:

    1. Track the confusion matrix of users. We currently expect this to be done by some entity outside the Panoptes platform. This could be a script that runs periodically on someone's laptop, or an external webservice that gets classifications streamed to it in real time by Caesar (this is what Darryl is doing). We don't currently have a good place to store the confusion matrix itself inside the Panoptes platform, but if the matrix identifies an expert classifier, post that into Panoptes under the project_preferences resource (see the API call above).

    2. Calculate the likelihood of subjects. This is done in the same place that calculates the confusion matrices. The resulting likelihood should be posted into Caesar as a reduction.

    3. Retire subjects when we know the answer. By posting the likelihood into Caesar, we can set rules on it. For instance:

      • IF likelihood < 0.1 AND classifications_count > 5 THEN retire()
      • IF likelihood > 0.9 AND classifications_count > 5 THEN retire()
      • IF likelihood > 0.1 AND likelihood < 0.9 AND not seen_by_expert AND classifications > 10 THEN move to expert_set
    4. When Caesar moves subjects into an expert-only subject set, Designator can then serve subjects from that set only to users marked as experts via their project_preferences. Designator serves subjects from sets with specified chances, which lets us avoid the situation where experts only ever see the really hard subjects, e.g. by mixing 50% hard images with 50% "general population".

    Errors

    The Caesar API uses conventional HTTP error codes:

    Error Code Meaning
    400 Bad Request -- The request is malformed.
    401 Unauthorized -- The bearer token is missing or invalid.
    403 Forbidden -- You don't have access to the requested resource.
    404 Not Found -- The specified resource could not be found.
    405 Method Not Allowed -- You tried to access a resource with an invalid HTTP method.
    406 Not Acceptable -- You requested a format that isn't JSON.
    410 Gone -- The requested resource has been removed from our servers.
    429 Too Many Requests -- You're sending requests too quickly. Slow down!
    500 Internal Server Error -- We had a problem with our server. Try again later.
    503 Service Unavailable -- We're temporarily offline for maintenance. Please try again later.