Publications
2023
Abstract
This bachelor thesis aims to extend the Personal Identity Agent of the Digidow project with two new authentication methods based on FIDO2 tokens. So far, users had to use a password for the authentication process; a method for authenticating with FIDO2 tokens had not been implemented yet. Therefore, the authentication process was enhanced to support security keys. Initially, two-factor authentication with security keys as the second factor was implemented. In addition, the application now also fulfills the requirement of passwordless authentication. This thesis first describes the theoretical background of FIDO2 token authentication and then gives a detailed overview of its functionality. It also outlines the design choices for the implementation and the individual implementation steps. Finally, the Tor Browser, the WebAuthn standard, the security key setup, and implementation options are evaluated.
Abstract
Biometric data are among the most privacy-sensitive categories of data. More and more systems process these data. Since such systems (at least technically) do not have to disclose their existence, there can be no complete list of the systems that process one's personal data. However, at least those systems that cause real-world consequences are known to the public. What options are there to defend oneself against the risks of such systems?
Abstract
Anonymous credential (AC) systems are a powerful cryptographic tool for privacy-preserving applications and provide strong user privacy guarantees for authentication and access control. ACs allow users to prove possession of attributes encoded in a credential without revealing any information beyond them. A delegatable AC (DAC) system is an enhanced AC system that allows the owners of credentials to delegate the obtained credential to other users. This makes it possible to model hierarchies as usually encountered within public-key infrastructures (PKIs). DACs also provide stronger privacy guarantees than traditional AC systems since the identities of issuers and delegators can also be hidden. In this paper we present a novel DAC scheme that supports attributes, provides anonymity for delegations, allows the delegators to restrict further delegations, and also comes with an efficient construction. Our approach builds on a new primitive that we call structure-preserving signatures on equivalence classes on updatable commitments (SPSEQ-UC). The high-level idea is to use a special signature scheme that can sign vectors of set commitments, where signatures can be extended by additional set commitments. Signatures additionally include a user’s public key, which can be switched. This allows us to efficiently realize delegation in the DAC. Similar to conventional SPSEQ, the signatures and messages can be publicly randomized and thus allow unlinkable delegation and showings in the DAC system. We present further optimizations such as cross-set commitment aggregation that, in combination, enable efficient selective showing of attributes in the DAC without using costly zero-knowledge proofs. We present an efficient instantiation that is proven to be secure in the generic group model and finally demonstrate the practical efficiency of our DAC by presenting performance benchmarks based on an implementation.
Abstract
Requirements on data privacy and information security, as well as data quality and simplification, cause a continuous trend towards federated identity systems for the digital world. These are often the single sign-on platforms offered by large international companies like Apple, Facebook and Google. This article evaluates how a decentralized, open, and global ecosystem for digital biometric identification in the physical world could be designed based on the model of federated single sign-on. The main idea behind such a concept is implicit interaction with existing sensors, in order to get rid of plastic cards and smartphone-based mobile IDs in the far future. Instead, individuals should be capable of proving their permission to use a service solely based on their biometrics. While this vision is already proven feasible using centralized databases collecting biometrics of the whole population, an approach based on self-sovereign, decentralized digital identities would be favorable. In the ideal case, users of such a system would retain full control over their own digital identity and would be able to host their own digital identity wherever they prefer. Based on an analysis of the trade-off between privacy and practicability, and a comparison of this trade-off with observable design choices in existing digital ID approaches, we derive a concept for a decentralized, open, and global-scale ecosystem for private digital authentication in the physical world.
Abstract
Biometrics are among the most privacy-sensitive kinds of data. Ubiquitous authentication systems with a focus on privacy favor decentralized approaches as they reduce potential attack vectors, both on a technical and organizational level. The gold standard is to let the user be in control of where their own data is stored, which consequently leads to a high variety of devices used. Moreover, in comparison with a centralized system, designs with higher end-user freedom often incur additional network overhead. Therefore, when using face recognition for biometric authentication, an efficient way to compare faces is important in practical deployments, because it reduces both the network and hardware requirements that are essential to encourage device diversity. This paper proposes an efficient way to aggregate embeddings used for face recognition based on an extensive analysis of different datasets and the use of different aggregation strategies. As part of this analysis, a new dataset has been collected, which is available for research purposes. Our proposed method supports the construction of massively scalable, decentralized face recognition systems with a focus on both privacy and long-term usability.
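The abstract does not fix a specific aggregation strategy; as a rough illustration only, a common baseline is to average the per-image embeddings into a single template and re-normalize, so that a probe needs one comparison instead of one per reference image (names and dimensions below are illustrative):

```python
import numpy as np

def aggregate_embeddings(embeddings):
    """Average several face embeddings into one template and
    re-normalize, so matching needs a single comparison."""
    template = np.mean(embeddings, axis=0)
    return template / np.linalg.norm(template)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy usage with stand-in vectors instead of real face embeddings:
rng = np.random.default_rng(0)
refs = [rng.normal(size=128) for _ in range(3)]
probe = refs[0] + 0.1 * rng.normal(size=128)
template = aggregate_embeddings(refs)
print(cosine_similarity(template, probe))
```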
Event
Abstract
Ubiquitous authentication systems with a focus on privacy favor decentralized approaches as they reduce potential attack vectors, both on a technical and organizational level. The gold standard is to let the user be in control of where their own data is stored, which consequently leads to a high variety of devices used and in turn often incurs additional network overhead. Therefore, when using face recognition, an efficient way to compare faces is important in practical deployments. This paper proposes an efficient way to aggregate embeddings used for face recognition based on an extensive analysis of different datasets and the use of different aggregation strategies. As part of this analysis, a new dataset has been collected, which is available for research purposes. Our proposed method supports the construction of massively scalable, decentralized face recognition systems with a focus on both privacy and long-term usability.
2022
Abstract
Biometrics are among the most privacy-sensitive kinds of data. Ubiquitous authentication systems with a focus on privacy favor decentralized approaches as they reduce potential attack vectors, both on a technical and organizational level. The gold standard is to let the user be in control of where their own data is stored, which consequently leads to a high variety of devices used. Moreover, in comparison with a centralized system, designs with higher end-user freedom often incur additional network overhead. Therefore, when using face recognition for biometric authentication, an efficient way to compare faces is important in practical deployments, because it reduces both the network and hardware requirements that are essential to encourage device diversity. This paper proposes an efficient way to aggregate embeddings used for face recognition based on an extensive analysis of different datasets and the use of different aggregation strategies. As part of this analysis, a new dataset has been collected, which is available for research purposes. Our proposed method supports the construction of massively scalable, decentralized face recognition systems with a focus on both privacy and long-term usability.
Event
Abstract
Signatures are annoying when you are trying to build software reproducibly, especially when deeply embedded in the output artifact. Let’s look at how we can tackle this problem elegantly with Nix.
Reproducibility means that someone else can independently recreate exactly the same binary artifact. This is very useful for confidently knowing what source code a binary artifact was built from. People analyze artifacts with tools like diffoscope to exactly locate differences between two artifacts built using the same build instructions. For complex projects, even when looking at an exact difference in the output that way, it is not always easy to find the cause of that difference. In general, using Nix to split the build instructions into smaller steps can help make this process easier, because we can notice differences at the end of the intermediary step that introduced them, as long as we run nix build --rebuild on the right build steps.
Even then, signatures are still a problem, because we can never really reproduce a signed artifact without access to the signing key, and even with access to the key, not all popular signing schemes produce signatures deterministically. We either have to substitute in the expected signatures or keep track of those expected differences.
There is a nice pattern that we can use for always substituting the correct signatures with Nix, which makes it easy to verify embedded signatures as part of such an independent recreation process even for a large and complicated artifact. The same pattern also takes advantage of Nix’s binary caches to automatically obtain all the required signatures, which are ideally the only thing we cannot reproduce.
Abstract
Distributed systems are widely considered more privacy friendly than centralized systems because there is no central authority with access to all of the information. However, this does not consider the importance of network privacy. If users establish peer-to-peer connections to each other, adversaries monitoring the network can easily find out who is communicating with whom, at which times, and for how long, even if the communication is end-to-end encrypted. For digital identity systems this is especially critical, because knowledge about when and where an individual uses their digital identity is equivalent with knowing what the individual is doing.
The research presented in this thesis strives to design a distributed digital identity system that remains resilient against passive adversaries by instrumenting the anonymity network Tor. Significant efforts were dedicated to analyze how suited the Tor network is for supporting such distributed systems by measuring the usage of onion services and the time needed to start a new onion service. While this analysis did not detect any privacy issues within the current Tor network, it revealed several shortcomings in regard to the network latency of Tor onion services, which are addressed in the final parts of this thesis. Several modifications are proposed that are shown to significantly reduce the waiting times experienced by users of privacy preserving distributed digital identity systems.
Abstract
A Personal Identity Agent (PIA) is a digital representative of an individual and enables their authentication in the physical world with biometrics. Crucially, this authentication process maximizes privacy of the individual via data minimization. The PIA is an essential component in a larger research project, namely the Christian Doppler Laboratory for Private Digital Authentication in the Physical World (Digidow). While the project is concerned with the overall decentralized identity system, spanning several entities (e.g. PIA, sensor, verifier, issuing authority) and their interactions meant to establish trust between them, this work specifically aims to design and implement a PIA for Android. The latter entails three focus areas: First, an extensive analysis of secret storage on Android for securely persisting digital identities and/or their sensitive key material. Specifically, we are looking at the compatibility with modern cryptographic primitives and algorithms (group signatures and zero knowledge proofs) to facilitate data minimization. Second, we reuse existing Rust code from a different PIA variant. Thereby we analyze and adopt a solution for language interoperability between the safer systems programming language Rust and the JVM. And third, we strengthen the trust in our Android PIA implementation by evaluating the reproducibility of the build process. As part of the last focus area we uncovered and fixed a non-determinism in a large Rust library and subsequently achieved the desired reproducibility of the Android PIA variant.
Event
Abstract
Especially in times of a pandemic, in which wearing masks temporarily became mandatory, the question arises which influence covering different parts of the face has on modern face recognition. Are there parts of the face that are of particular importance for recognition? We evaluate this objectively using a commonly used dataset and three different modern face recognition systems. In the process, we discovered a behavior of a state-of-the-art face recognition algorithm that reproducibly fails to recognize people in certain specific situations. Furthermore, we live in a world in which ever more data is generated. The chance is therefore high that face recognition systems possess multiple images of the same person. Compared to a single image, multiple different images very likely carry additional information. How can images be combined to exploit this additional information without incurring considerably more (time) overhead?
Abstract
Anonymous credential (AC) systems are a powerful cryptographic tool for privacy-preserving applications and provide strong user privacy guarantees for authentication and access control. ACs allow users to prove possession of attributes encoded in a credential without revealing any information beyond them. A delegatable AC (DAC) system is an enhanced AC system that allows the owners of credentials to delegate the obtained credential to other users. This makes it possible to model hierarchies as usually encountered within public-key infrastructures (PKIs). DACs also provide stronger privacy guarantees than traditional AC systems since the identities of issuers and delegators are also hidden. A credential issuer’s identity may convey information about a user’s identity even when all other information about the user is protected.
We present a novel delegatable anonymous credential scheme that supports attributes, provides anonymity for delegations, allows the delegators to restrict further delegations, and also comes with an efficient construction. In particular, our DAC credentials do not grow with delegations, i.e., are of constant size. Our approach builds on a new primitive that we call structure-preserving signatures on equivalence classes on updatable commitments (SPSEQ-UC). The high-level idea is to use a special signature scheme that can sign vectors of set commitments which can be extended by additional set commitments. Signatures additionally include a user’s public key, which can be switched. This allows us to efficiently realize delegation in the DAC. Similar to conventional SPSEQ signatures, the signatures and messages can be publicly randomized and thus allow unlinkable showings in the DAC system. We present further optimizations such as cross-set commitment aggregation that, in combination, enable selective, efficient showings in the DAC without using costly zero-knowledge proofs. We present an efficient instantiation that is proven to be secure in the generic group model and finally demonstrate the practical efficiency of our DAC by presenting performance benchmarks based on an implementation.
Event
Abstract
This work proposes a modular automation toolchain to analyze the current state and over-time changes of reproducibility of build artifacts derived from the Android Open Source Project (AOSP). While perfect bit-by-bit equality of binary artifacts would be a desirable goal to permit independent verification of whether binary build artifacts really are the result of building a specific state of source code, this form of reproducibility is often not (yet) achievable in practice. Certain complexities in the Android ecosystem make assessment of production firmware images particularly difficult. To overcome this, we introduce “accountable builds” as a form of reproducibility that allows for legitimate deviations from 100 percent bit-by-bit equality. Using our framework that builds AOSP in its native build system, automatically compares artifacts, and computes difference scores, we perform a detailed analysis of differences, identify typical accountable changes, and analyze current major issues leading to non-reproducibility and non-accountability. We find that pure AOSP itself builds mostly reproducibly and that Project Treble helped through its separation of concerns. However, we also discover that Google’s published firmware images deviate from the claimed codebase (partially due to side-effects of Project Mainline).
Abstract
Web technologies have evolved rapidly in the last couple of years and applications have gotten significantly bigger. Common patterns and tasks have been extracted into numerous frameworks and libraries, and especially JavaScript frameworks seem to be recreated daily. This poses a challenge to many developers who have to choose between the frameworks, as a wrong decision can negatively influence the path of a project.
In this thesis, the three most popular front-end frameworks Angular, React and Vue are compared by extracting relevant criteria from the literature and evaluating the frameworks against these criteria. Angular is then used to develop a web application for displaying data from the Android Device Security Rating.
Abstract
Smartphones generate an abundance of network traffic while active and during software updates. With such a high amount of data, it is hard for humans to comprehend the processes behind the traffic and find points of interest that could compromise device security. To solve this problem, this thesis proposes a system to automatically monitor the traffic of Android clients, store it in a database, and perform a first analysis of the network data. For the capturing and monitoring tasks, we decided to use the full packet capture system Arkime and expand its functionality with a custom tool built in the course of this thesis. To be able to gain relevant insights, the system monitors the traffic over a long time frame, which prevents false data caused by holes in the data stream or one-time events. All Android devices are separated from each other by assigning each device to a separate VLAN. For each session, the system produces custom tags, low-level statistical data and high-level classification data. Further, the system provides a solution to apply custom rules in which data from sessions can be freely accessed and modified. Additionally, tags can be set by matching host names against custom regular expressions or update information stored in the database. The system uses only the captured data, so that changes that can occur later on, like DNS resolution, don’t affect the accuracy of the outcome.
Abstract
Digital identity documents provide several key benefits over physical ones. They can be created more easily, incur less cost, improve usability and can be updated if necessary. However, the deployment of digital identity systems does come with several challenges regarding both security and privacy of personal information. In this paper, we highlight one challenge that digital identity systems face if they are set up in a distributed fashion: Network Unlinkability. We discuss why network unlinkability is so critical for a distributed digital identity system that wants to protect the privacy of its users and present a specific definition of unlinkability for our use-case. Based on this definition, we propose a scheme that utilizes the Tor network to achieve the required level of unlinkability by dynamically creating onion services and evaluate the feasibility of our approach by measuring the deployment times of onion services.
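For context, dynamically creating an onion service as described can be sketched with the stem controller library; the ControlPort, the port mapping, and the timing code below are assumptions for illustration, not the paper's actual measurement setup:

```python
import time
from stem.control import Controller

# Create a short-lived onion service and time how long publication takes.
with Controller.from_port(port=9051) as controller:  # local Tor ControlPort assumed
    controller.authenticate()
    start = time.monotonic()
    service = controller.create_ephemeral_hidden_service(
        {80: 8080},              # expose local port 8080 as onion port 80
        await_publication=True,  # block until the descriptor reaches the HSDirs
    )
    print(f"{service.service_id}.onion ready after {time.monotonic() - start:.2f}s")
    controller.remove_ephemeral_hidden_service(service.service_id)
```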
Abstract
Debiting money "in passing", reading or copying cards by briefly placing a smartphone on top of them, eavesdropping on transactions from a distance: these are all frequently cited attack scenarios in connection with near field communication (NFC) payments. But do these scenarios pose a serious security risk? Are there further critical security aspects? Do payments with a plastic card differ from those with a smartphone in this regard? The following article gives an overview of NFC payments and their potential security risks.
Abstract
The Digital Shadow project, developed at the Institute for Networks and Security, requires verifiable trust in many areas in order to recognize and authorize users based on their biometric data. This trust should give the user the opportunity to check the correctness of the system quickly and easily before he or she provides the system with biometric data. This master’s thesis deals with the existing tools that can create such trust. The implemented system combines these tools in order to identify users in the Digital Shadow network with their biometric data. Incorrect use of this sensitive data should be excluded and the smallest possible set of metadata should be generated. Based on the implemented system, we discuss the properties of a trustworthy environment for software and explain the necessary framework requirements.
Abstract
In current single sign-on authentication schemes on the web, users are required to interact with identity providers securely to set up authentication data during a registration phase and receive a token (credential) for future access to services and applications. This type of interaction can make authentication schemes challenging in terms of security and availability. From a security perspective, a main threat is theft of authentication reference data stored with identity providers. An adversary could easily abuse such data to mount an offline dictionary attack for obtaining the underlying password or biometric. From a privacy perspective, identity providers are able to track user activity and control sensitive user data. In terms of availability, users rely on trusted third-party servers that need to be available during authentication. We propose Decentralized Anonymous Multi-Factor Authentication (DAMFA), a novel decentralized privacy-preserving single sign-on scheme in which identity providers no longer require sensitive user data and can no longer track individual user activity. Moreover, our protocol eliminates dependence on an always-on identity provider during user authentication, allowing service providers to authenticate users at any time without interacting with the identity provider. Our approach builds on threshold oblivious pseudorandom functions (TOPRF) to improve resistance against offline attacks and uses a distributed transaction ledger to improve availability. We prove the security of DAMFA in the universal composability (UC) model by defining an ideal functionality for DAMFA and formally proving the security of our scheme via ideal-real simulation. Finally, we demonstrate the practicability of our proposed scheme through a prototype implementation.
2021
Abstract
This bachelor thesis looks at the development of securely exporting single conversations in instant messaging (IM) apps for Android, specifically for the private messenger Signal-Android, a free and open-source, cross-platform, centralized, encrypted messaging service. Initially, this paper looks at existing messenger apps and their chat export tools that allow users to obtain results similar to the designed feature. It evaluates the risks users take when trusting these other services, but does not reflect much on their components or technological aspects. The present paper ignores operating systems and platforms other than Android, considering that the main point of the work lies in the security of the IM app and not in device reliability.
The writing re-examines the characteristics of different messenger apps and presents a simple solution for exporting single chats in Signal. In this context, some apps that present alternatives to the designed solution were also researched. The proposal of the chat export feature for Signal-Android begins in the next section, covering the primary functionalities for exporting chats, including the functionality analysis, the solution design, and the implementation. The following chapter focuses on developing this new function added to the Signal-Android app and compares the obtained outcomes with results of a similar solution. The final part summarizes the project, explains the problems and reviews possible improvements and subsequent development steps.
Event
Abstract
Every distributed system needs some way to list its current participants. The Tor network’s consensus is one way of tackling this challenge. But creating a shared list of participants and their properties without a central authority is a challenging task, especially if the system is constantly targeted by state-level attackers. This work carefully examines the Tor consensuses created in the last two years, identifies weaknesses that have already impacted users, and proposes improvements to strengthen the Tor consensus in the future. Our results show undocumented voting behavior by directory authorities and suspicious groups of relays that try to conceal the fact that they are all operated by the same entity.
Abstract
This master’s thesis covers the reverse engineering of the Da Jian Innovation low-level Wi-Fi protocol. With deductive reasoning, we try to establish logical connections between drone control instructions and their corresponding sent network packets. We further cluster UDP packets based on their payload length and perform bit-precise reasoning on payloads of interest. We unveil the protocol’s core structure, which enables pixel-perfect camera-feed and telemetry data extraction. Finally, we introduce a proprietary software solution to capture, analyse and post-process drone operation relevant network packets.
Abstract
A service that has to interact with multiple potential biometric sensors needs to share information about an individual with them. Although it is possible that there will be no interaction with such a sensor, the data is shared nevertheless. Any shared piece of biometric data of an individual could lead to a potential leakage of sensitive data. To prevent this, we introduce a fuzzy hash, which avoids this problem by generating a hash that cannot be traced back to the original biometric data. Still, this hash can be compared against other embeddings, which allows the sensor to interact with the correct service without an interactive protocol.
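The abstract leaves the construction open; one classical way to obtain such comparable yet non-invertible codes is random-hyperplane locality-sensitive hashing, sketched here under that assumption (dimensions and bit lengths are illustrative):

```python
import numpy as np

class HyperplaneLSH:
    """Random-hyperplane hashing: embeddings that are close in cosine
    distance receive bit codes with a small Hamming distance, while the
    code alone cannot be inverted back to the embedding."""

    def __init__(self, dim, n_bits, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.normal(size=(n_bits, dim))

    def hash(self, embedding):
        # One bit per hyperplane: which side of the plane the vector lies on.
        return (self.planes @ embedding > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

lsh = HyperplaneLSH(dim=128, n_bits=256)
rng = np.random.default_rng(1)
e1 = rng.normal(size=128)
e2 = e1 + 0.1 * rng.normal(size=128)  # a similar embedding
e3 = rng.normal(size=128)             # an unrelated embedding
print(hamming(lsh.hash(e1), lsh.hash(e2)))  # small distance
print(hamming(lsh.hash(e1), lsh.hash(e3)))  # around n_bits / 2
```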
Event
Abstract
With the deprecation of V2 onion services right around the corner, it is a good time to talk about V3 onion services. This post will discuss the most important privacy improvements provided by V3 onion services as well as their limitations. Aware of those limitations, our research group at the Institute of Networks and Security at JKU Linz conducted an experiment that extracts information about how V3 onion services are being used from the Tor network.
Abstract
In order to increase the accuracy of SOTA face recognition pipelines, it would intuitively make sense to not only use a single image as reference embedding (template), but to combine multiple embeddings from different images (different pose, angle, setting) to create a more accurate and robust template. In order to objectively evaluate our different proposed combinations of embeddings, we would benefit from having a single metric to tell how well the template is performing on our dataset. For certain applications (e.g. opening doors) a low false-positive rate is required, while in other situations (e.g. sensors contacting PIAs) a low false-negative rate is required. Therefore, in this document we try to balance these different approaches by using the harmonic mean of recall and precision.
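The harmonic mean of recall and precision referenced here is the standard F1 score:

```latex
F_1 = \frac{2}{\frac{1}{\text{precision}} + \frac{1}{\text{recall}}}
    = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}
```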
Event
Abstract
Tor onion services are a challenging research topic because they were designed to reveal as little metadata as possible which makes it difficult to collect information about them. In order to improve and extend privacy protecting technologies, it is important to understand how they are used in real world scenarios. We discuss the difficulties associated with obtaining statistics about V3 onion services and present a way to monitor V3 onion services in the current Tor network that enables us to derive statistically significant information about them without compromising the privacy of individual Tor users. This allows us to estimate the number of currently deployed V3 onion services along with interesting conclusions on how and why onion services are used.
Abstract
Mobile device authentication has been a highly active research topic for over 10 years, with a vast range of methods proposed and analyzed. In related areas, such as secure channel protocols, remote authentication, or desktop user authentication, strong, systematic, and increasingly formal threat models have been established and are used to qualitatively compare different methods. However, the analysis of mobile device authentication is often based on weak adversary models, suggesting overly optimistic results on their respective security. In this article, we introduce a new classification of adversaries to better analyze and compare mobile device authentication methods. We apply this classification to a systematic literature survey. The survey shows that security is still an afterthought and that most proposed protocols lack a comprehensive security analysis. The proposed classification of adversaries provides a strong and practical adversary model that offers a comparable and transparent classification of security properties in mobile device authentication.
Abstract
This work proposes a modular automation toolchain to analyze the current state and measure over-time improvements of reproducibility of the Android Open Source Project (AOSP). While perfect bit-by-bit equality of binary artifacts would be a desirable goal to permit independent verification of whether binary build artifacts really are the result of building a specific state of source code, this form of reproducibility is often not (yet) achievable in practice. In fact, binary artifacts may need to be designed in a way that makes it impossible to simply remove all sources of non-determinism and all non-reproducible build inputs (such as private signing keys). We introduce “accountable builds” as a form of reproducibility that allows such legitimate deviations from 100 percent bit-by-bit equality. Based on our framework that builds AOSP with its native build system, automatically compares artifacts, and computes difference scores, we perform a detailed analysis of discovered differences, identify typical accountable changes, and analyze current major issues that lead to non-reproducibility. While we find that AOSP currently builds neither fully reproducibly nor fully accountably, we derive a trivial weighted change metric to continuously monitor changes in reproducibility over time.
Abstract
This document tries to find simple heuristics on images of faces to differentiate between successful and unsuccessful face recognition. Intuitively, the camera-face angle might play an important role: full-frontal images contain a lot of information, in contrast to full-profile images where at least half of the face is hidden. Therefore, as a proxy for this angle we focus on these metrics: (1) the distance between the eyes, relative to the face width, (2) the distance between the center of the eyes and the mouth, relative to the face height, and (3) the face size.
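Given landmark coordinates from a face detector, the three metrics can be computed directly. The sketch below interprets metric (2) as the distance from the midpoint between the eyes to the mouth center; the landmark layout and coordinates are hypothetical, not the document's actual code:

```python
import numpy as np

def pose_heuristics(landmarks, face_w, face_h):
    """Compute the three proxy metrics from 2D landmark coordinates."""
    left_eye = np.asarray(landmarks["left_eye"])
    right_eye = np.asarray(landmarks["right_eye"])
    mouth = np.asarray(landmarks["mouth"])
    eye_dist = np.linalg.norm(right_eye - left_eye) / face_w   # metric (1)
    eye_center = (left_eye + right_eye) / 2
    eye_mouth = np.linalg.norm(mouth - eye_center) / face_h    # metric (2)
    face_size = face_w * face_h                                # metric (3)
    return eye_dist, eye_mouth, face_size

# Hypothetical landmark pixel coordinates from a face detector:
lm = {"left_eye": (60, 80), "right_eye": (120, 82), "mouth": (90, 150)}
print(pose_heuristics(lm, face_w=160, face_h=200))
```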
Event
Abstract
Tor onion services utilize the Tor network to enable incoming connections on a device without disclosing its network location. Decentralized systems with extended privacy requirements like metadata-avoiding messengers typically rely on onion services. However, a long-lived onion service address can itself be abused as identifying metadata. Replacing static onion services with dynamic short-lived onion services may be a way to avoid such metadata leakage. This work evaluates the feasibility of short-lived dynamically generated onion services in decentralized systems. We show, based on a detailed performance analysis of the onion service deployment process, that dynamic onion services are already feasible for peer-to-peer communication in certain scenarios.
Event
Abstract
Most state-of-the-art face detection algorithms are usually trained with full-face pictures, without any occlusions. The first novel contribution of this paper is an analysis of the accuracy of three off-the-shelf face detection algorithms (MTCNN, Retinaface, and DLIB) on occluded faces. In order to determine the importance of different facial parts, the face detection accuracy is evaluated in two settings: Firstly, we automatically modify the CFP dataset and remove different areas of each face: we overlay a grid over each face and remove one cell at a time. Similarly, we overlay a rectangle over the main landmarks of a face – eye(s), nose and mouth. Furthermore, we mimic a face mask by overlaying a rectangle starting from the bottom of the face. Secondly, we test the performance of the algorithms on people with real-world face masks. The second contribution of this paper is the discovery of a previously unknown behaviour of the widely used MTCNN face detection algorithm – if there is a face inside another face, MTCNN does not detect the larger face.
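The grid-removal experiment can be mimicked with a few lines of array slicing; the 4x4 grid size and the black fill value below are assumptions for illustration, not necessarily the paper's parameters:

```python
import numpy as np

def occlude_cell(image, rows, cols, cell_row, cell_col, fill=0):
    """Black out one cell of a rows x cols grid laid over a face crop."""
    h, w = image.shape[:2]
    y0, y1 = h * cell_row // rows, h * (cell_row + 1) // rows
    x0, x1 = w * cell_col // cols, w * (cell_col + 1) // cols
    out = image.copy()
    out[y0:y1, x0:x1] = fill
    return out

# One occluded variant per grid cell for a (stand-in) face crop:
face = np.zeros((112, 112, 3), dtype=np.uint8)
variants = [occlude_cell(face, 4, 4, r, c)
            for r in range(4) for c in range(4)]
```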
Abstract
In our digitized society, in which different organizations attempt to control and monitor Internet use, anonymity is one of the most desired properties that ensures privacy on the Internet. One of the technologies that can be used to provide anonymity is the anonymization network Tor, which obfuscates the connection data of communications in a way that its initiator cannot be identified. However, since this only protects the initiator without protecting further communication participants, Tor Onion Services were developed, which ensure the anonymity of both the sender and the recipient. Due to the metadata created when using these Onion Services, adversaries could still be able to identify participants in a communication by using additional sources of information.
In the course of this thesis, a protocol was developed that reduces metadata leading to the identification of communication participants as far as possible. For this purpose, a two-stage addressing scheme was employed that allows users to obtain an individual address for a service via its public service address, which cannot be traced back. To prove its technical feasibility, a prototype of the protocol was implemented in Python. Since latency is one of the decisive criteria when deciding whether to use a service, a performance analysis was carried out to measure the provisioning time of onion services, as this has a significant influence on the duration of address issuing. The architecture and procedure for this had to be specially designed and implemented, as at the time of writing no research existed on the provisioning time of onion services in their current version.
A statistical analysis of the results revealed that the duration of issuing individual addresses using the proposed protocol, at 6.35 seconds, exceeds the acceptance threshold of users. However, this does not apply to service access using the individual address, implying that the use of the protocol is possible after improving the address issuance procedure. This would reduce the metadata when accessing an onion service and thus help improve the anonymity of communication participants.
Abstract
Various forms of digital identity increasingly act as the basis for interactions in the “real” physical world. While transactions such as unlocking physical doors, verifying an individual’s minimum age, or proving possession of a driving license or vaccination status without carrying any form of physical identity document or trusted mobile device could be easily facilitated through biometric records stored in centralized databases, this approach would also trivially enable mass surveillance, tracking, and censorship/denial of individual identities.
Towards a vision of decentralized, mobile, private authentication for physical world transactions, we propose a threat model and requirements for future systems. Although it is yet unclear if all threats listed in this paper can be addressed in a single system design, we propose this first draft of a model to compare and contrast different future approaches and inform both the systematic academic analysis as well as a public opinion discussion on security and privacy requirements for upcoming digital identity systems.
Abstract
This bachelor thesis is about the development of a secure export of chat history from the messenger app Wire. Wire is an end-to-end encrypted audio/video/chat service for various platforms. The aim of this thesis is to extend the open source Android client in such a way that a secure export of an entire (group) conversation, including the media it contains, is possible. Restrictions such as time-limited messages are also addressed. The export is done as a Zip file, which contains the messages in an XML document as well as the media files. Additionally, an HTML viewer can be included to view the exported data.
Abstract
Android is the most widely deployed end-user focused operating system. With its growing set of use cases encompassing communication, navigation, media consumption, entertainment, finance, health, and access to sensors, actuators, cameras, or microphones, its underlying security model needs to address a host of practical threats in a wide variety of scenarios while being useful to non-security experts. The model needs to strike a difficult balance between security, privacy, and usability for end users, assurances for app developers, and system performance under tight hardware constraints. While many of the underlying design principles have implicitly informed the overall system architecture, access control mechanisms, and mitigation techniques, the Android security model has previously not been formally published. This article aims to both document the abstract model and discuss its implications. Based on a definition of the threat model and Android ecosystem context in which it operates, we analyze how the different security measures in past and current Android implementations work together to mitigate these threats. There are some special cases in applying the security model, and we discuss such deliberate deviations from the abstract model.
Abstract
Face recognition pipelines are under active development, with many new publications every year. The goal of this report is to give an overview of a modern pipeline and recommend a state-of-the-art approach while optimizing for accuracy and performance on low-end hardware, such as a Jetson Nano.
Abstract
Monitoring the activities of onion services by deploying multiple HSDir nodes has been done repeatedly in the past. With v3 onion services, Tor mitigated such attacks by blinding the public keys of onion services before uploading them. This effectively prevents the collection of onion addresses, but it does not prevent the collection of blinded public key uploads and downloads, which provide statistical insight into how onion services are being used. Additionally, it is possible to identify and link blinded keys derived from well-known onion services, providing a solid estimate of how often they are accessed. This report presents our setup to collect statistically significant information on v3 onion service usage without compromising the privacy of Tor users.
Abstract
This work focuses on methods to capture and analyze data transmitted by Wireless Local Area Network (WLAN) clients in order to track them. This includes the evaluation of methods where control of the Access Point (AP) infrastructure is not needed and clients do not need to be connected to a WLAN network. This mainly involves data in probe requests, which are transmitted by clients when actively searching for WLAN APs. To evaluate this in a real-world scenario, a setup consisting of multiple distributed capture devices and a central analysis system is introduced. The captured data is analyzed to verify theoretical concepts. A large share of WLAN client devices still leak lists of stored SSID values when actively scanning for WLAN networks. MAC address randomization helps to protect privacy if enabled. User identities for EAP authentication, however, are still leaked in default configuration by all major operating systems. Finally, some extension ideas as well as current trends and developments are presented.
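Capturing probe requests without controlling any AP can be sketched with scapy on a monitor-mode interface; the interface name below is an assumption, and this is an illustration of the general technique rather than the thesis' actual capture setup:

```python
from scapy.all import Dot11, Dot11Elt, Dot11ProbeReq, sniff

def handle(pkt):
    # Log transmitter MAC and probed SSID from each probe request.
    if pkt.haslayer(Dot11ProbeReq) and pkt.haslayer(Dot11Elt):
        ssid = pkt[Dot11Elt].info.decode(errors="replace") or "<broadcast>"
        print(pkt[Dot11].addr2, ssid)

# Requires a WLAN interface in monitor mode; the name is an assumption.
sniff(iface="wlan0mon", prn=handle, store=False)
```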
2020
Event
Abstract
Token-based authentication is usually applied to enable single sign-on on the web. In current authentication schemes, users are required to interact with identity providers securely to set up authentication data during a registration phase and receive a token (credential) for future accesses to various services and applications. This type of interaction can make authentication schemes challenging in terms of security and usability. From a security point of view, one of the main threats is the compromise of identity providers. An adversary who compromises the authentication data (password or biometric) stored with the identity provider can mount an offline dictionary attack. Furthermore, the identity provider might be able to track user activity and control sensitive user data. In terms of usability, users always need a trusted server to be online and available while authenticating to a service provider.
In this paper, we propose a new Decentralized Anonymous Multi-Factor Authentication (DAMFA) scheme where the process of user authentication no longer depends on a trusted third party (the identity provider). Also, service and identity providers do not gain access to sensitive user data and cannot track individual user activity. Our protocol allows service providers to authenticate users at any time without interacting with the identity provider. Our approach builds on a Threshold Oblivious Pseudorandom Function (TOPRF) to improve resistance to offline attacks and uses a distributed transaction ledger to improve usability. We demonstrate the practicability of our proposed scheme through a prototype.
Abstract
Confidentiality of data stored on mobile devices depends on one critical security boundary in case of physical access, the device’s lockscreen. If an adversary is able to satisfy this lockscreen challenge, either through coercion (e.g. border control or customs check) or due to their close relationship to the victim (e.g. intimate partner abuse), private data is no longer protected. Therefore, a solution is necessary that renders secrets not only inaccessible, but also allows one to plausibly deny their very existence. This thesis proposes an app-based system that hides sensitive apps within Android’s work profile, with a strong focus on usability. It introduces a lockdown mode that can be triggered inconspicuously from the device’s lockscreen, for example by entering a wrong PIN. Usability, security and current limitations of this approach are analyzed in detail.
Abstract
Reproducible builds enable the creation of bit-identical artifacts by performing a fully deterministic build process. This is especially desirable for any open source project, including the Android Open Source Project (AOSP). Initially, we cover reproducible builds in general and give an overview of the problem space and typical solutions. Moving forward, we present Simple Opinionated AOSP builds by an external Party (SOAP), a simple suite of shell scripts used to perform AOSP builds and compare the resulting artifacts against Google references. This is utilized to create a detailed report of the differences. The qualitative part of this report attempts to gain insight into the origin of differences, while the quantitative part provides a quick summary.
Abstract
Mobile device authentication has been a highly active research topic for over 10 years, with a vast range of methods having been proposed and analyzed. In related areas such as secure channel protocols, remote authentication, or desktop user authentication, strong, systematic, and increasingly formal threat models have already been established and are used to qualitatively and quantitatively compare different methods. Unfortunately, the analysis of mobile device authentication is often based on weak adversary models, suggesting overly optimistic results on their respective security. In this article, we first introduce a new classification of adversaries to better analyze and compare mobile device authentication methods. We then apply this classification to a systematic literature survey. The survey shows that security is still an afterthought and that most proposed protocols lack a comprehensive security analysis. Our proposed classification of adversaries provides a strong uniform adversary model that can offer a comparable and transparent classification of security properties in mobile device authentication methods.
Event
Abstract
We are very pleased to welcome you to the 2nd ACM Workshop on Wireless Security and Machine Learning. This year’s WiseML is a virtual workshop and we are both excited to try out this workshop format and regretful not to be able to welcome you in the beautiful city of Linz, Austria, due to the ongoing COVID-19 pandemic. ACM WiseML 2020 continues to be the premier venue to bring together members of the AI/ML, privacy, security, wireless communications and networking communities from around the world, and to offer them the opportunity to share their latest research findings in these emerging and critical areas, as well as to exchange ideas and foster research collaborations, in order to further advance the state-of-the-art in security techniques, architectures, and algorithms for AI/ML in wireless communications. The program will be presented online in a single track. WiseML 2020 will be open at no extra cost to everyone and we are trying out new formats such as a mixture of live streams, pre-recorded talks, and interactive Q/A sessions.
Event
Abstract
We are very pleased to welcome you to the 13th ACM Conference on Security and Privacy in Wireless and Mobile Networks. This year’s WiSec marks the first virtual WiSec conference and we are both excited to try out this conference format and regretful to not be able to welcome you in the beautiful city of Linz, Austria, due to the ongoing SARS-CoV-2 pandemic. ACM WiSec 2020 continues to be the premier venue for research dedicated to all aspects of security and privacy in wireless and mobile networks, their systems, and their applications. The program will be presented online in a single track, along with a poster and demonstration session. WiSec 2020 will be open at no extra cost to everyone and we are trying out new formats such as a mixture of live streams, pre-recorded talks, and interactive Q/A sessions.
Abstract
Methods for recognizing people are both heavily researched at present and widely used in practice, for example by government and police. People can be recognized using various methods, such as face, finger and iris recognition, which differ massively in terms of requirements. Gait recognition allows identifying people despite large distances, hidden body parts and arbitrary camera angles, which makes it a naturally attractive method of identifying people. This approach exploits the uniqueness of every person's gait. Most of the current literature focuses on hand-crafted features, such as step and stride length, cadence, speed and hip angle. This thesis proposes a way of performing gait recognition using neural networks. Hence, features no longer have to be specified manually, while also improving on the current state-of-the-art accuracy in recognizing people. First, in order to increase robustness against clothing changes, the silhouette of a person is extracted using Mask R-CNN. To capture spatial information about the subject, a convolutional neural network creates a gait embedding based on each silhouette. To improve quality, the next step takes temporal information into account using a long short-term memory network, which consumes the single-picture-based embeddings of multiple images and computes its own, enhanced embedding. Last but not least, the network should not be trained from scratch for every new person. Thus, a Siamese network is trained to be able to distinguish two people which the network has (probably) never seen before.
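A compact sketch of the described per-frame CNN plus LSTM fusion is shown below; layer sizes, input shapes, and embedding dimensions are illustrative assumptions, not the thesis' actual architecture:

```python
import torch
import torch.nn as nn

class GaitEmbedder(nn.Module):
    """A small CNN embeds each silhouette frame; an LSTM fuses the
    sequence of frame embeddings into a single gait embedding."""

    def __init__(self, emb_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, emb_dim),
        )
        self.lstm = nn.LSTM(emb_dim, emb_dim, batch_first=True)

    def forward(self, silhouettes):  # shape: (batch, time, 1, H, W)
        b, t = silhouettes.shape[:2]
        per_frame = self.cnn(silhouettes.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(per_frame)
        return h_n[-1]  # final hidden state as the gait embedding

# Siamese-style comparison: embed two sequences, compare their distance.
model = GaitEmbedder()
seq_a = torch.rand(1, 30, 1, 64, 44)  # 30 silhouette frames
seq_b = torch.rand(1, 30, 1, 64, 44)
print(torch.dist(model(seq_a), model(seq_b)))
```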
Abstract
Contact tracing is one of the main approaches widely proposed for dealing with the current, global SARS-CoV-2 crisis. As manual contact tracing is error-prone and doesn’t scale, tools for automated contact tracing, mainly through smart phones, are being developed and tested. While their effectiveness—also in terms of potentially replacing other, more restrictive measures to control the spread of the virus—has not been fully proven yet, it is critically important to consider their privacy implications from the start. Deploying such tools quickly at mass scale means that early design choices may not be changeable in the future, and potential abuse of such technology for mass surveillance and control needs to be prevented by their own architecture.
Many different implementations are currently being developed, including international projects like PEPP-PT/DP-3T and national efforts like the “Stopp Corona” app published by the Austrian Red Cross. In this report, we analyze an independent implementation called NOVID20 that aims to provide a common framework for on-device contact tracing embeddable in different apps. That is, NOVID20 is an SDK and not a complete app in itself. The initial code drop on Github was released on April 6, 2020, without specific documentation on the intent or structure of the code itself. All our analysis is based on the Android version of this open source code alone. Given the time period, our analysis is neither comprehensive nor formal, but summarizes a first impression of the code.
NOVID20 follows a reasonable privacy design by exchanging only pseudonyms between the phones in physical proximity and recording them locally on-device. However, there is some room for improvement: (a) pseudonyms should be generated randomly on the phone, and not on the server side; (b) transmitted pseudonyms should be frequently rotated to avoid potential correlation; (c) old records should automatically be deleted after the expunge period; (d) absolute location tracking, while handled separately from physical proximity and only optionally released, can be problematic depending on its use—absolute location data must be protected with additional anonymization measures such as Differential Privacy, which are left to the application/server and may, therefore, not be implemented correctly; and (e) device analytics data, while helpful during development and testing, should be removed for real deployments. Our report gives more detailed recommendations on how this may be achieved.
We explicitly note that all of these points can be fixed based on the current design, and we thank the NOVID20 team for openly releasing their code, which made this analysis possible in a short time window.
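Recommendations (a) to (c) can be illustrated with a small sketch: pseudonyms generated on-device, rotated frequently, and expunged after a retention window. The concrete intervals below are assumptions for illustration, not values from the report:

```python
import secrets
import time

ROTATION_SECONDS = 15 * 60        # assumption: rotate every 15 minutes
EXPUNGE_SECONDS = 14 * 24 * 3600  # assumption: 14-day retention window

def new_pseudonym() -> str:
    # (a) generate the pseudonym randomly on the phone, not server-side
    return secrets.token_hex(16)

class PseudonymStore:
    def __init__(self):
        self.current = new_pseudonym()
        self.issued_at = time.time()
        self.seen = []  # (timestamp, pseudonym) pairs observed in proximity

    def broadcast_value(self) -> str:
        # (b) rotate the transmitted pseudonym frequently
        if time.time() - self.issued_at > ROTATION_SECONDS:
            self.current, self.issued_at = new_pseudonym(), time.time()
        return self.current

    def expunge(self):
        # (c) automatically delete old records after the expunge period
        cutoff = time.time() - EXPUNGE_SECONDS
        self.seen = [(t, p) for t, p in self.seen if t >= cutoff]
```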
Event
Abstract
How can we use digital identity for authentication in the physical world without compromising user privacy? Enabling individuals to – for example – use public transport and other payment/ticketing applications, access computing resources on public terminals, or even cross country borders without carrying any form of physical identity document or trusted mobile device is an important open question. Moving towards such device-free, infrastructure-based authentication could be easily facilitated by centralized databases with full biometric records of all individuals, authenticating and therefore tracking people in all their interactions in both the digital and physical world. However, such centralized tracking does not seem compatible with fundamental human rights to data privacy. We therefore propose a fully decentralized approach to digital user authentication in the physical world, giving each individual better control over their interactions and the data traces they leave.
In project Digidow, we assign each individual in the physical world a personal identity agent (PIA) in the digital world, facilitating their interactions with purely digital or digitally mediated services in both worlds. We have two major issues to overcome. The first is a problem of massive scale, moving from current users of digital identity to the whole global population as the potential target group. The second is even more fundamental: by moving from trusted physical documents or devices and centralized databases to a fully decentralized and infrastructure-based approach, we remove the currently essential elements of trust. In this poster, we present a system architecture to enable trustworthy distributed authentication and a simple, specific scenario to benchmark an initial prototype that is currently under development. We hope to engage with the NDSS community to both present the problem statement and receive early feedback on the current architecture, additional scenarios and stakeholders, as well as international conditions for practical deployment.
2019
Abstract
The so-called Digidow project aims to provide a decentralized solution for digital identity management. A key feature is to provide a service for authentication along with the identification of individual persons based on biometric features.
At the center of this idea, a so-called personal agent should provide this decentralized functionality for each individual user. The sensitive nature of the data this agent handles requires a special level of security standards for both the implementation and the surrounding system.
This master thesis evaluates the programming language Rust as a potential platform choice for the personal agent. We discuss the features Rust was chosen for and which additional frameworks were selected and used to create the prototype we used for the evaluation. Furthermore, we dive into the details of our prototype and present the implemented concepts. Moreover, we test our implementation and discuss our achievements, like isolated access to the hard drive, the concept behind the architecture, and how incoming data is verified. Finally, we discuss how future work can build on the introduced and existing concepts.
Event
Abstract
The Digidow architecture is envisioned to tie digital identities to physical interactions using biometric information without the need for a central collection of biometric templates. A key component of the architecture is the distributed service discovery, for establishing a secure and private connection between a prover, a verifier and a sensor if none of them knows the others ahead of time. In this paper we analyze the requirements of the service discovery with regard to functionality and privacy. Based on typical use-cases we evaluate the advantages and disadvantages of letting each of the actors be the initiator of the discovery process. Finally, we outline how existing technologies could be leveraged to achieve our requirements.
Abstract
The prediction of future locations can be useful in various settings, one being the authentication process of a person. In this thesis, we perform the prediction of next places with the help of a hidden Markov model (HMM). We focus on models with a discrete state space and thus need to discretise the data. This is done by pre-processing the raw, continuous location data in two steps. The first step is the extraction of stay-points, i.e. regions in which a person spends a given period of time. In the second step, multiple stay-points are grouped with the clustering algorithm DBSCAN to form significant places. After pre-processing, we train an HMM with a state and observation space that correspond to the extracted significant places. Based on the previously observed location, our model predicts the next place for a person. In order to find good models for next place prediction, we experimented with two datasets. The first one is the Geolife GPS trajectory dataset from Microsoft, which consists of GPS traces. The second dataset was self-collected and contains additional data obtained from WiFi and cell towers. Our final model achieves a validation accuracy higher than 0.95 on both datasets. However, a prediction accuracy ranging from 0.8 to 0.99 for a model that solely predicts noise as its future location leads us to the conclusion that the datasets as well as the pre-processing step need further refinement for our HMM to capture more valuable information.
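As a simplified illustration of next-place prediction over the extracted significant places (in the thesis, the HMM's state and observation spaces coincide with these places), a first-order transition matrix with additive smoothing already captures the core idea; the place IDs and toy sequence below are invented:

```python
import numpy as np

def fit_transitions(place_sequence, n_places, alpha=1.0):
    """Estimate a first-order transition matrix over significant places,
    with additive smoothing so unseen transitions keep nonzero mass."""
    counts = np.full((n_places, n_places), alpha)
    for prev, nxt in zip(place_sequence, place_sequence[1:]):
        counts[prev, nxt] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def predict_next(transitions, current_place):
    return int(np.argmax(transitions[current_place]))

# Toy sequence of place IDs (e.g. 0 = home, 1 = work, 2 = gym):
seq = [0, 1, 1, 0, 2, 0, 1, 1, 0]
T = fit_transitions(seq, n_places=3)
print(predict_next(T, current_place=0))  # most likely place after place 0
```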