Scenario 2: Attendance Tracking

The attendance tracking scenario was originally planned as a classroom attendance list for students enrolled in certain courses at the Institute of Networks and Security. A trial was planned where students could opt in to participate in the digital attendance list.

However, due to the pandemic and all courses being held online in 2020 and 2021, we decided to adapt the setup. The adapted scenario tracks the presence of employees in certain shared spaces of the institute to assist in contact tracing. For a first implementation, we chose the shared printer room.

Apart from the target audience, the originally intended structure of scenario 2 remains unchanged: A sensor (camera plus image processing and evaluation) observes the entrance of the printer room and recognizes persons upon entering. The sensor processes the biometric information and compares it against the set of registered persons. If there is a match, the PIA of the identified person is informed; the PIA then connects to the verifier, which creates an entry in the attendance list (consisting of an ID and a timestamp). As opposed to scenario 1, the sensor is a completely separate entity that is not under the control of the individual, bringing us further towards the overall project goal by having three independent entities (PIA, sensor, and verifier).


In this scenario, the sensor consists of a Jetson Nano (a small single-board computer) and a camera. It performs face detection and recognition to identify individuals and has two main tasks:

  • Receiving registration from a PIA: In the future, a sensor directory will be used as a bridge to allow PIAs to discover and assess sensors for expected authentication transactions (based on predictions by the PIA’s location model). In this scenario, however, the PIA has the information about the single available sensor statically configured. To participate in the attendance list scenario, the PIA registers with the sensor (cf. Registration to sensor below). This request includes an authenticated face biometric template to recognize the individual and information on how to contact the PIA when the corresponding individual is identified. As a face biometric template we use an embedding extracted from a deep neural network instead of facial images. Protecting the information contained in such an embedding is the subject of ongoing research: we want to derive fuzzy hashes from the embeddings to prevent passive data collection across multiple sensors. In the future, we plan to encrypt this information to increase privacy for the individual.
  • Sending notification to PIA: When the sensor detects a person, the corresponding embedding is calculated and compared to all registered embeddings. If the distance between the detected embedding and a registered embedding is smaller than the predefined identification threshold, a credential attesting presence at the sensor is sent to the corresponding PIA (using the registered callback information). Our implementation is based on verifiable credentials: the message sent to the PIA is a verifiable presentation containing a credential with a room ID and detection timestamp, and a second credential holding the detected embedding. The latter allows the PIA to also verify the sensor measurement result.
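The matching step described above can be sketched as follows. The threshold value, the choice of Euclidean distance, and the data layout are illustrative assumptions for this sketch, not the actual sensor implementation.

```python
import math

# Illustrative identification threshold; the real value depends on the
# embedding network and distance metric used by the sensor.
IDENTIFICATION_THRESHOLD = 0.6

def euclidean_distance(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def find_match(detected, registered):
    """Return the registration closest to the detected embedding,
    provided the distance is below the identification threshold;
    otherwise return None (no notification is sent)."""
    best, best_dist = None, float("inf")
    for reg in registered:
        dist = euclidean_distance(detected, reg["embedding"])
        if dist < best_dist:
            best, best_dist = reg, dist
    if best_dist < IDENTIFICATION_THRESHOLD:
        return best  # carries the registered callback information
    return None
```

If a registration is returned, the sensor would use its stored callback information to push the presence credential to the corresponding PIA.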


Each individual has their own personal identity agent (PIA) as their representation in the digital world. The PIA runs through multiple stages in this scenario:

  • Setup: As opposed to scenario 1, where each individual carries their own PIA as an app on their smartphone, scenario 2 relies on PIAs hosted on a server at the INS. This server is a Debian VM on the virtualization platform of the institute. Each PIA instance runs as a separate process inside the VM. The PIA process is encapsulated in a systemd service which also takes care of assigning a unique port number for the PIA web interface and a unique enrollment PIN to the instance.
  • Enrollment: When a new participating individual (employee) wants to enroll their PIA instance, the PIA systemd unit is spun up and the individual receives the unique URL for their PIA (consisting of the address of the VM and the PIA port number) together with the enrollment PIN. The individual can then pair with their PIA. After loading the enrollment page, the individual is asked for their ID number, the PIN code, and a portrait image to complete the pairing process. Since we do not yet have an issuing authority that attests credentials, the individual provides their own biometric template (in the form of a portrait picture from which the PIA derives an embedding) and their own ID number; the PIA creates a corresponding self-attested credential for that information. Depending on the chosen options (opt-in for participation in future testing of new biometric matching algorithms), the PIA either persistently stores the raw image data or uses the raw image only once to create the embedding. The reason for offering this opt-in is that storing the raw portrait image allows automatic recreation of embeddings when the face recognition algorithms/neural networks are changed in future versions of the prototype, without requiring the user to explicitly upload a new picture. Once the embedding is successfully created, the individual is the owner of the PIA instance and has a credential for authenticating to the attendance list.
  • Registration to sensor: After the enrollment process, the PIA automatically registers with all available sensors. As scenario 2 does not yet include a sensor directory service, the PIA obtains static information about available sensors from a configuration file and registers with every sensor that matches its available credentials. In the current implementation, we only have one credential, which is designed to be used with a single face recognition sensor; with our planned face recognition field trial, further face recognition sensors will be added to this system. For the face recognition sensor, the embedding and a callback URL are registered with the sensor so that the sensor can send a push message back to the PIA when the corresponding individual is detected.
  • Callback from sensor: Whenever the sensor finds a match for the provided embedding, a message is sent to the PIA (cf. Sending notification to PIA above). The message is validated by checking the verifiable presentation provided by the sensor. The credential received from the sensor is then used to build a verifiable presentation for authenticating with the attendance list verifier.
  • Authenticate to verifier: The last step for the PIA is to connect to the verifier and send the verifiable presentation to complete the Digidow transaction. The verifiable presentation consists of the sensor credential attesting the room ID and detection timestamp, and the PIA credential attesting the person’s ID number. Like the list of sensors, the verifier endpoint is currently statically configured.
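The per-instance setup described in the Setup stage could look roughly like the following systemd template unit. All unit names, paths, binary names, and variable names here are hypothetical; the actual deployment at the INS may differ.

```ini
# Hypothetical template unit pia@.service; one instance per individual,
# e.g. "systemctl start pia@alice". Names and paths are illustrative.
[Unit]
Description=Personal Identity Agent instance %i
After=network.target

[Service]
# Per-instance environment file providing a unique web-interface port
# and a unique enrollment PIN for this PIA instance.
EnvironmentFile=/etc/pia/%i.env
ExecStart=/usr/local/bin/pia --port ${PIA_PORT} --enrollment-pin ${PIA_PIN}
Restart=on-failure

[Install]
WantedBy=multi-user.target
```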
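The presentation the PIA sends to the verifier in the last step can be sketched with simplified Python dictionaries. All field and type names here are illustrative; real verifiable credentials additionally carry cryptographic proofs binding them to issuer and holder.

```python
def build_presentation(sensor_credential, person_credential):
    """Assemble a simplified verifiable presentation from the sensor
    credential (room ID + detection timestamp) and the PIA's
    self-attested identity credential (person ID number). A real
    implementation would add a cryptographic proof section."""
    return {
        "type": "VerifiablePresentation",
        "verifiableCredential": [sensor_credential, person_credential],
    }

# Illustrative credential contents, not the actual wire format.
sensor_cred = {"type": "PresenceCredential",
               "roomId": "printer-room",
               "detectedAt": "2021-06-01T10:15:00Z"}
person_cred = {"type": "IdentityCredential", "personId": "12345"}

presentation = build_presentation(sensor_cred, person_cred)
```

The PIA would then POST such a presentation to the statically configured verifier endpoint.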


The main purpose of the verifier in this scenario is to verify the received credentials and assemble an attendance list for the referenced room(s). To this end, the verifier provides an HTTP REST interface for accepting requests from the PIAs. On receiving such a request, the verifier checks whether the message contains the valid credentials needed to create a new entry on the list; if so, a new entry is stored.
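The verifier's check-and-store logic can be sketched as follows. This is a minimal sketch under assumed credential names matching the presentation structure described above; actual signature and proof verification of the credentials is omitted.

```python
# In-memory attendance list; a real verifier would persist its entries.
attendance_list = []

def handle_request(presentation):
    """Validate a (simplified) verifiable presentation and, if it carries
    both required credentials, append a new attendance entry consisting
    of person ID, room ID, and detection timestamp. Cryptographic
    verification of the credentials is omitted in this sketch."""
    creds = {c.get("type"): c
             for c in presentation.get("verifiableCredential", [])}
    sensor_cred = creds.get("PresenceCredential")
    person_cred = creds.get("IdentityCredential")
    if sensor_cred is None or person_cred is None:
        return False  # reject: a required credential is missing
    attendance_list.append({
        "personId": person_cred["personId"],
        "roomId": sensor_cred["roomId"],
        "timestamp": sensor_cred["detectedAt"],
    })
    return True
```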