OpenID for European Digital Identity
An architectural analysis of user-centric identity management
Recent European efforts around digital identity – the EUDI regulation and its OpenID architecture – aim high: to provide an EU-wide authentication framework. However, their current technical and legislative architecture is based on a limited conceptualization of identity. None of the legal and technical texts involved explicitly defines this central term; and their implicit model of the concept does not go beyond a digitalization of identity cards and similar documents. Based on several other standards, we therefore propose a deeper, explicit definition. Grounded in this definition, we identify several issues in the design of OpenID4VCI and OpenID4VP: (1) available credentials are limited to static, preset configurations, flexible in packaging format but not in semantic content; (2) querying claims in credentials happens through a custom, format-specific language; and (3) credentials can only be about a (single) principal user of a service, who must (synchronously) interact for each transaction. These problems limit the kind of credentials that can be issued, the way specific information can be requested, and the capabilities for (dynamic, asynchronous) automation of these transactions. Overall, this restricts the application of these specifications to classic scenarios involving a small number of well-known authoritative documents over which a handful of questions is repeatedly formulated. Moreover, the functional requirements for this limited set of use cases are already fully supported by existing solutions, such as the OpenID Connect ecosystem, which calls the need for the new OpenID specifications into question. We therefore also look into the non-functional advantages claimed by OpenID’s new trust model: an increase in control, privacy, and portability of personal information. However, on none of these measures does the new generation of specifications clearly surpass the existing one. Not only do the technical choices limit the capabilities of the EUDI framework; the legislation itself also cannot accommodate the promise of self-sovereign identity. In particular, we criticize the introduction of institutionalized trusted lists, and discuss their economic and political risks. Their potential to degenerate into an exclusionary, recentralized ecosystem endangers the vision of user-oriented identity management in which individuals are in charge. Instead, the consequences might severely restrict people in what they can do with their personal information, and risk increased linkability and monitoring. In anticipation of revisions to the EUDI regulations, we suggest several technical alternatives that overcome some of the issues with the architecture of OpenID. In particular, OAuth’s UMA extension and its A4DS profile, as well as their integration in GNAP, are worth looking into. Future research into uniform query (meta-)languages is needed to address the heterogeneity of attestations and providers.
Executive summary
The EUDI regulation aims to create an EU-wide authentication framework promoting self-sovereign identity, but is constructed around a limited set of use cases. While the new generation of OpenID specifications provides an architecture to implement these scenarios, existing decentralized alternatives are equally capable of handling them. Lacking a thorough conceptualization of (digital) identity, the EUDI regulation and its OpenID architecture limit their possible applications, and therefore fail to deliver the promised increase in control, privacy, and portability of personal information.
Aggregating terminology from multiple international legal and technical standards, we define (pseudonymous) identity as a subset of measurable characteristics of an entity that, when taken together, are sufficient to represent and distinguish that entity within a given domain of application. However, for practically all use cases, it suffices to identify context-dependent roles – a form of partial identity. Interpreting identification as the exchange of attributes as claims about a subject, we characterize the process of ‘authentication’ as the verification of authenticity of such claims – in particular their origin and integrity – based on trust in some authority (i.e., the issuer–holder–verifier model). It follows from these definitions that credentials are merely certificates: documents attesting to the truth of certain stated facts.
Based on these definitions, we find a mismatch between the ‘paradigm-shifting’ promises of EUDI and OpenID, and the actual capabilities of their combined framework. The OpenID architecture, on the one hand, is
- based on a history of software design that disregards best practices in internet security and interoperability;
- limited to static credential types, and an inflexible, format-dependent query language; and
- not applicable to dynamic or automated use cases.
Even within classic scenarios of credential exchange, the trust model underlying OpenID4VCI and OpenID4VP offers no advantages over earlier solutions, such as the OpenID Connect ecosystem:
- OpenID’s wallet offers offline availability, but it does not increase portability of personal information in the usual Bring-Your-Own-Identity sense, nor does it add any extra control through informed consent.
- Selective disclosure seems like a big step forward in user control over personal data; were it not that this form of data minimization predominantly applies to wallet-based solutions, and is in fact not necessary in many other decentralized identity models.
- Despite the change of interactions between issuer, holder, and verifier, privacy issues merely shift from the identity provider to the wallet provider – or not at all, when following all the recommended security precautions.
The EUDI regulation, on the other hand, remains equally far from achieving its promise of self-sovereign identity. The use of institutionalized trusted lists makes participation in the identity market dependent on economic and political incentives. This risks a de facto recentralization of digital identity that not only weakens the security and neutrality of the Web, but also makes the framework vulnerable to vendor lock-ins, data correlation, and government abuse.
In anticipation of regulatory changes, we suggest alternatives that aim at a broader interpretation of authentication and identity. The sound foundation provided by W3C’s decentralized identifiers (DID) and verifiable credential model (VC) can be complemented by frameworks based directly on OAuth. Ongoing research into the asynchronous and dynamic capabilities of User-Managed Access (UMA), Authorization for Data Spaces (A4DS), and the Grant Negotiation and Authorization Protocol (GNAP) makes these prime candidates. Future research is still needed into uniform query (meta-)languages, and support for more general attestation documents.
Introduction
The concept of identity plays a big role in the fields of Data Spaces1 (DS) and Identity and Access Management (IAM): from the personal information of an individual human being, through the information collected about legal entities, to the processing of any of the former, either through traditional methods or involving artificial intelligence.2 Despite this importance, however, definitions of the term ‘identity’ are surprisingly rare in legal and technical documents related to these fields.
In this paper, we look into the impact this has on the recent developments around the European Digital Identity (EUDI) framework, and its technical implementation through OpenID specifications. In Section 1, we present the state of the art on relevant concepts, to establish a sound foundation to start from: we attempt to aggregate a workable definition of the term ‘identity’ and link it to the concepts of anonymity and pseudonymity (1.1); we define ‘authentication’ and explain the trust model around certification authorities (1.2); and we look into the difference between certificates and credentials (1.3).
Based on this understanding, we assess the technical capabilities of OpenID’s specifications in Section 2: we discuss the design choices inherited from OIDC (2.1), and point out the limitations of OpenID4VCI (2.2) and OpenID4VP (2.3). Given the lack of additional functionality, we try to find a rationale for OpenID’s new design by looking into its non-functional characteristics in Section 3. We conclude, however, that it does not offer any practical increase in portability (3.1), control (3.2), or privacy (3.3).
Before offering a number of alternatives to OpenID’s architecture in Section 5, we look into the relevant aspects of the EUDI legislation itself in Section 4: we explain the workings of trusted lists (4.1); discuss an example of the effects of their institutionalization (4.2); and point out the implications of the economic and political incentives they give rise to (4.3).
1 The identity of identity
Looking at the European Digital Identity (EUDI) legislation, Regulation 2024/1183 [7], as well as its predecessor Regulation 910/2014 (eIDAS) [6], and its Architecture and reference framework (ARF) [8; 9], it is unclear precisely what is meant by the term ‘identity’; none of them provides an explicit definition. The same holds for many of the technical specifications by Standards Development Organizations (SDOs),3 which play a major role in the framework’s architecture: W3C’s Decentralized Identifiers (DID) [10], Verifiable credentials data model (VC) [11; 12], and Federated credential management (FedCM) [13]; OpenID’s OpenID for verifiable credential issuance (OpenID4VCI) [14], and OpenID for verifiable presentations (OpenID4VP) [15]. Older specifications, on which some of these more recent standards are based, merely contain a brief description of ‘identity’: “[a] set of attributes related to an entity,” in OpenID Connect (OIDC) [3]; and “[the] essence of an entity … described by one’s characteristics,” in OASIS’s Glossary for SAML (SAML) [16] – referencing the Merriam-Webster dictionary. These are far from workable definitions for core specifications around this topic.
Note that the aim of this paper is not to find or attempt a universal, one-size-fits-all definition of ‘identity’. Even more so than many other concepts, its meaning heavily depends on the domain of governance (legal, political, technical) and the sociocultural context. What we point out is rather the lack of any definition of the term, neither given nor referenced, in the many texts pertaining to this topic – while still employing the term in other definitions. In particular, when crossing from one domain to another – e.g., from legislation to technical implementation – lacking a (matching) definition of this core concept poses the risk of operationalizing a system that does not correspond to the original intentions.
We therefore turn to meta-standards and glossaries of those same SDOs. Neither W3C’s [17], nor IETF’s RFC 1983 (FYI 18) [18], nor its RFC 7642 [19], contains a definition of ‘identity’. On the other hand, both IETF’s RFC 4949 (FYI 36) [20] and RFC 6973 [21], as well as OASIS’s Identity Metasystem Interoperability [22], provide a somewhat detailed definition of the term. Expanding our field of search, we find several – somewhat outdated – documents giving an overview of the state of the art (at the time), including the references mentioned above: a ‘living paper’ regularly updated at the Technical University of Delft [23], the reports of the Future of Identity in the Information Society (FIDIS) project [24; 25],4 and the extensive Security glossary [26] and Privacy glossary [27] of Wheeler & Wheeler. In their bibliographies, we discover a number of additional sources that provide much more elaborate definitions of ‘identity’ – and numerous related terms – predominantly published by U.S. institutions: OECD/LEGAL/0491 [28], ISO/IEC 24760-1:2025 (A framework for identity management (part 1)) [29], ISO/IEC 24765:2017 (Systems and software engineering vocabulary) [30], NIST SP 800-63-4 [5], NIST SP 800-103 (IPD) (An ontology of identity credentials (part 1)) [31], NIST FIPS 200 (Minimum security requirements) [32], NIST FIPS 201-3 (Personal Identity Verification) [33], and CNSSI No. 4009 [4].5 We discuss these in the following section.
1.1 Characteristics of (id)entities
Of the handful of definitions we found, most are stated in terms of entities. We paraphrase: an entity is anything that has measurable attributes (characteristics) by which it can be represented (named, described) and distinguished (recognized) in a relevant domain of applicability [20; 29; 30]. While most definitions of the term attribute turn out to be circular or self-referential,6 we manage to aggregate the following: an attribute is any unique piece of information that associates an entity with a measurable value of a particular type [4; 6; 20; 28; 30; 36]. An identity is then any well-defined subset of those characteristics that together suffices to actually represent that entity and uniquely distinguish it from any other entity within a concrete context (domain) [4; 5; 21; 28; 33].7
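As an illustration, these aggregated definitions can be modeled directly as data structures. The following TypeScript sketch (the type and function names are ours; none of the cited standards defines them) treats an identity as a subset of attributes that is distinguishing within a domain:

```typescript
// An attribute associates an entity with a measurable value of a given type.
type Attribute = { type: string; value: string | number | boolean };

// Any set of attributes is a partial identity.
type PartialIdentity = Attribute[];

// A partial identity is a (proper) identity within a domain when no other
// entity in that domain shares all of its attribute values.
function isIdentity(
  candidate: PartialIdentity,
  otherEntities: PartialIdentity[], // attribute sets of all other entities in the domain
): boolean {
  return !otherEntities.some((other) =>
    candidate.every((a) =>
      other.some((b) => b.type === a.type && b.value === a.value),
    ),
  );
}
```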
Since any set of attributes that several entities share can be extended to a (minimal) distinguishing superset, we also call sets of attributes in general partial identities [29].8 Interestingly, partial identity immediately implies (partial9) anonymity: “a state of an individual in which an observer … cannot identify [it] within a set of other individuals” [21]. Such a set of individuals, which share “the same attributes, making them indistinguishable from each other [for a particular observer],” is called an anonymity set [21]. It is clear that any nontrivial entity has many partial identities, and therefore many different anonymity sets.10
A number of sources emphasize the distinction between identities and identifiers [16; 37], but they fail to point out the precise difference: both are unique sets of characteristics. While identifiers are often atomic strings (e.g., labels, serial numbers, indexes), they can also be more complex or combined attribute structures (e.g., names, addresses, bibliographic references) [4; 20; 30]. If anything, they are minimal (i.e., irreducible) identities, from which the removal of a single attribute leaves a mere partial identity. This aligns with the recurring view that identifiers represent (other) identities – larger (super)sets of attributes – rather than entities [3; 4; 29; 30; 33; 37].
An entity potentially has multiple proper (i.e., non-partial) identities too, both in different contexts and within a single one. We use the term principal (user) to refer to the particular identity by which the active party11 is known in the context of a specific (user) session [4; 20; 21; 29].12 When a principal identity can be “associated with multiple interactions” (with the same entity) [5] – because it contains at least “the minimal identity information sufficient to … establish it as a link” [29] – it is also called a persona [4; 11], or pseudonym.13 The latter term is typically used when it concerns a minimal persona: a privacy-preserving identifier, which does not let the verifier infer anything else regarding the entity – in particular not the identities by which the entity is known to other parties [3; 5; 16].14
1.2 On trust in authorities
The act of presenting identity information to a system – making claims about the attributes of an entity (the subject)15 – and the system subsequently validating16 the collected information in order to recognize the represented entity – as (uniquely) distinct within a context – is called identification [4; 5; 20; 29]. Authentication, on the other hand, is the more formal process of establishing a sufficient level of assurance in the authenticity17 [3; 16; 28; 29; 33] of an entity. It involves verifying18 the origin (i.e., source) and integrity19 of information [4; 7; 20; 40; 42], in order to determine the binding between the presented claims and the entity they concern [3; 5; 16; 18; 20; 29; 32; 39; 40; 42; 43].20 In the context of (user) identification, authentication determines “the degree to which the identity … can be proved to be the one claimed” [30], typically by ensuring that the entity controls (e.g., is, possesses, or knows) a valid token (authenticator), bound to their account – e.g., a private key corresponding to a registered public key – to demonstrate that they are associated with that account [5; 44].21
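The final step described above can be illustrated with a minimal challenge–response sketch, assuming an Ed25519 key registered at enrollment (the code uses Node’s built-in crypto module; all values are illustrative):

```typescript
import { generateKeyPairSync, randomBytes, sign, verify } from "node:crypto";

// At enrollment: the principal registers a public key with the system.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// At authentication: the verifier issues a fresh challenge, and the
// principal demonstrates control of the bound private key by signing it.
const challenge = randomBytes(32);
const proof = sign(null, challenge, privateKey);

// The verifier checks the binding between the claimed account and the
// presented proof, via the registered public key.
const authenticated = verify(null, challenge, publicKey, proof); // true
```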
Assurance about the authenticity of information can be based on trust in third-party (certification) authorities (CAs) – the archetypal trust service provider [6] – that vouch for its integrity and accuracy. These trusted authorities are often the primary or authoritative sources, who manage the life cycle of the information, keeping it accurate and up-to-date [39];22 as well as the issuers, who generate (signed) documents (certificates) asserting the information, and provide them to principals [3; 5; 6; 11; 14; 15; 20; 29; 30; 33]. When the asserted information is about the principal themself, we also call such authorities credential service providers (CSPs) or identity providers (IDPs), and the certificates they issue (verifiable) credentials – or, to use the EUDI term, (electronic) attestations of attributes (EAAs) [7; 8; 16; 21; 28; 29].23
Having received a certificate, its holder – the principal to whom it was issued and who controls it [31] – can transmit it to third parties as (verifiable) presentations [5; 11; 14; 15]: a selection of assertions (derived) from one or more certificates [11; 15], typically bound to (a key of) the holder – to prove their legitimate possession of the certificate in question [15]. The third-party service provider that receives a (verifiable) presentation from a principal, and performs verification to assess its authenticity, is called the verifier [11; 14; 15; 29], or relying party – because it trusts and relies on a system of digital identity solutions to confirm the validity of the assertions [11; 16; 20; 28; 29].
1.3 An anatomy of attestations
Certificates typically consist of two distinct types of information. First, they include the actual content, as one or more assertions: claims made by the issuer about the subject, associating it with certain attribute values [3; 11; 14; 16; 45]. Second, they contain attestation information: evidential (meta-)statements that describe the authentication context, consisting of all the “information that the relying party can require before it makes [a] decision with respect to an authentication” [3]. This includes the verification and revocation methods, the level of assurance, security characteristics, and all the (cryptographic) verification factors needed to “verify the security of cryptographically protected information” [11]. These factors bind the asserted claims together, to the issuer and – in case of credentials – to the principal [4].
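This two-part anatomy can be sketched as follows, loosely modeled on the W3C VC data model [11] (abridged and illustrative, not a conformant document):

```typescript
const certificate = {
  // 1. Content: assertions made by the issuer about the subject.
  credentialSubject: {
    id: "did:example:subject123",
    degree: { type: "BachelorDegree", name: "Computer Science" },
  },
  // 2. Attestation information: the authentication context and the
  //    (cryptographic) verification factors binding everything together.
  issuer: "did:example:university",
  validFrom: "2025-01-01T00:00:00Z",
  credentialStatus: { type: "RevocationList", id: "https://university.example/status/24" },
  proof: {
    type: "DataIntegrityProof",
    verificationMethod: "did:example:university#key-1",
    proofValue: "z58DAdFfa9...",
  },
};
```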
It is often assumed that “the holder of a credential … presenting the claims [to] the verifier is (controlled by) the subject of the claims” [46]. Some sources note certain exceptions, in which “claims in a credential can be about different subjects” [11] – not limited to assertions about the principal presenting them – or in which the holder is not a subject of the claims at all (e.g., a parent presenting their child’s birth certificate) [46]. These cases are in essence an abstraction from credentials towards certificates in general. They often lack technical support, though, because relying parties typically require information about the user they actually interact with during a session. Without this guarantee, malicious parties can abuse credentials (e.g., obtained through a data leak or other security breach) to impersonate someone else [46]. These scenarios underlie the idea of ‘legitimate’ holders of a credential, often – but not always – the subject.
Credentials are therefore often tied to their (legitimate) holder, through a mechanism called holder binding: the addition of holder-related verification factors. This additional attestation information serves as evidence, to be corroborated with information known to be in control of the legitimate holder. It can be something the holder has or knows, or something they are or (typically) do [39], including traditional accounts (e.g., password-based), cryptographic material (e.g., keys), biometric information (e.g., fingerprints), or any other claim linking the credential in question to information (i.e., another credential) of which the holder’s identity has already been established [14; 46]. Nevertheless, even OpenID – whose specifications indeed preclude scenarios in which holder and subject differ (cf. Section 2.2) – admits that it is “important to distinguish between the information that the credential holds (about the subject) and the information that the credential is bound to (about the holder)” [46]. This makes it hard to maintain a useful distinction between credentials and certificates – not unlike between identities and identifiers.
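A common realization of key-based holder binding is the cnf (confirmation) claim of RFC 7800, which embeds the holder’s public key in the signed credential, so that a verifier can demand a proof of possession of the corresponding private key. A sketch (claim values are illustrative):

```typescript
const credentialPayload = {
  iss: "https://issuer.example",    // the issuing authority
  sub: "did:example:subject123",    // the subject of the claims
  birthdate: "1990-01-01",          // an asserted claim
  cnf: {                            // holder binding: a verification factor
    jwk: { kty: "OKP", crv: "Ed25519", x: "11qYAYKxCrfVS_7TyWQHOg7hcvPapiMlrwIaaPcHURo" },
  },
};
```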
The distinction becomes even more blurry when considering that identification does not necessarily need to be unique. In a large number of use cases, it suffices that the relying party knows the principal’s partial identity.24 This is also apparent in the increasing adoption of role-based access control mechanisms over (purely) identity-based ones, among others in many cloud services (e.g., Google Cloud, Amazon Web Services).25 Entities are then issued a role certificate – in place of individual identifiers [19] – which only asserts that the entity is “[a] member of the set of [entities] that have identities that are assigned to the same role” [20]. One could even go so far as to say that roles are (partial) identities, i.e., sets of attributes shared by different entities [48]. Which roles or (partial) identities are distinguished depends entirely on “the context of a function delivered by a particular application” [49].
When we take all these nuances into account, the subject becomes less and less important. As a concrete example, take the sale of a property. To be able to sell it, the owners should at least provide a credential (i.e., the deed) attesting that they – as a subject – are indeed its legitimate proprietors. A number of other relevant documents, however – while definitely bound to the property – need not necessarily mention them (e.g., energy ratings, attestations of soil composition). Both the deed and the other documents will have to be transferred to the notary, their information verified at the relevant institutions, and subsequently processed into a bill of sale, signed by all parties involved, and passed on to the buyer. While the presence or absence of the holder–subject link thus forms a single (theoretical) difference between credentials and certificates, the (practical) similarities are much more numerous. We therefore conclude that – at least from a practical perspective – there is no useful distinction between credentials and (other) certificates. Both are a “document attesting to the truth of certain stated facts” [33].
2 OpenID’s architecture for EUDI
We now consider how this more elaborate understanding of the involved concepts – in particular the realization that identity and credentials can be any (certified) data – impacts the EUDI framework and its OpenID architecture. We start with a brief technical refresher.
The basis of the entire OpenID ecosystem is the OAuth 2.0 Authorization Framework [50], which moves the responsibility for access control from the resource server (RS) to a separate authorization server (AS). The latter provides clients (applications) of the former with access tokens, which enable access to protected resources, in exchange for a variety of grants: credentials that represent the resource owner’s approval of the requested access (e.g., client-specific credentials, interactively obtained codes). This exchange happens at the (singular) token endpoint, where the client authenticates itself, presents its grant, and requests a certain resource scope [50]. The OAuth 2.0 authorization server is therefore both an authority and a relying party: it issues tokens, but verifies grants. In terms of identity, it exchanges a client’s proper identity – linked to the grant – for a partial one: the token, which merely asserts that the client has a certain authorization. Note that the input (grants) can thus be a lot more complex than the output (tokens). This is also apparent in the variety of grant-related extensions to the protocol.26
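As a refresher, the exchange at the token endpoint looks roughly as follows (a sketch using the authorization code grant of RFC 6749; the endpoint URL and client values are illustrative):

```typescript
const response = await fetch("https://as.example/token", {
  method: "POST",
  headers: {
    "Content-Type": "application/x-www-form-urlencoded",
    Authorization: "Basic " + btoa("client123:secret"), // client authentication
  },
  body: new URLSearchParams({
    grant_type: "authorization_code",   // the grant: the owner's approval
    code: "SplxlOBeZQQYbYS6WxSbIA",     // interactively obtained code
    redirect_uri: "https://client.example/cb",
  }),
});
// The output is much simpler than the input: a token that merely asserts
// that the client holds a certain authorization (a partial identity).
const { access_token, token_type } = await response.json();
```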
2.1 OpenID Connect
All of OpenID’s architectures (OIDC, OpenID4VCI, OpenID4VP) are layered on top of this OAuth 2.0 design, reusing its authorization primitives (flows, endpoints, tokens, etc.) for authentication purposes.27 OIDC repurposes OAuth 2.0’s interactive authorization code flow,28 in which the principal authenticates themself; the flow returns an identity token (ID token) – containing a subject identifier – and optionally an access token, which enables the client to retrieve (additional) identity information from the identity provider’s userinfo endpoint [3].
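Decoded, the resulting ID token is a small set of issuer-signed claims centered on the subject identifier (values are illustrative):

```typescript
const idTokenPayload = {
  iss: "https://idp.example",   // the identity provider that issued the token
  sub: "24400320",              // subject identifier, namespaced to that issuer
  aud: "client123",             // the relying party the token is intended for
  iat: 1911280970,              // issued at
  exp: 1911281970,              // expiration
  email: "jane@example.org",    // optional standard claim
};
```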
By issuing ID tokens from the same token endpoint as access tokens, however, OIDC had to shoehorn much more complex information – any form of identity data – into an API that was designed for a much simpler (yes-or-no) output. While the extra userinfo endpoint seems to accommodate this added complexity, its interface only allows for a flat key-value mapping (from attribute names to JSON values). Moreover, specific claims from this mapping are requested using a non-standardized query language – best described as a custom version of JSONPath [53], or a primitive flavor of JSON Schema [3]. Even in their standardized form, these JSON-based languages are structurally coupled to the JSON value tree, and thus to a particular credential format, rather than to the semantic content of the attestation. This limits the (re)usability of a query, since it “imposes an implicit reliance on … the issuer’s local context, such as language and culture” [54]. It is no surprise, then, that – despite the extensibility of OIDC’s set of claims – implementations typically rely only on the subject identifier, or at most a limited selection of standard claims (name, email, address, phone number, birth date, gender).
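For example, OIDC’s claims request parameter, defined in OIDC Core, addresses claims by their key in the flat userinfo mapping (a sketch with illustrative claim selections):

```typescript
const claimsRequest = {
  userinfo: {
    given_name: { essential: true }, // claims are addressed by flat key names
    email: { essential: true },
    nickname: null,                  // null: requested, but not essential
  },
  id_token: {
    auth_time: { essential: true },
  },
};
```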
2.2 OpenID for Verifiable Credential Issuance
In OpenID4VCI, identity information is no longer provided as an ID token, but the essence remains the same: clients can request information as a credential using an OAuth 2.0 flow,29 and access it at the credential endpoint – replacing the userinfo one [14]. Nevertheless, EUDI implementers claim that its design has “several essential benefits” for the ecosystem: it enhances trust, security, privacy, control, compliance, and interoperability. As the underlying reason for these benefits, they state that OpenID4VCI combines the well-known flows and straightforward user experience of OIDC with the ‘new’ digital proofs technology of verifiable credentials – which they claim to be harder to fake or alter [57].
Practical attempts tell a different story, though. VCs indeed provide a semantically richer representation of credentials, but they are not inherently more expressive, nor more secure, than classic OIDC tokens. On the contrary, the OpenID4VCI specification itself lists JSON Web Tokens (JWTs) with JSON Web Signatures (JWS) [58; 59] – similar to an OIDC token – as a possible serialization of VCs [14].
Moreover, while the query mechanism got a slight upgrade, it remains focused on a narrow concept of credentials. Issuers specify a number of preset credential configurations in their metadata: particular combinations of a credential type with a credential format (e.g., “a driver’s license in ISO’s mdoc format”, “a university degree in SD-JWT format”). Concrete credentials are then the result of applying a credential configuration to a dataset of identity information [14]. Within these configurations, issuers can provide clients a decent amount of flexibility regarding the packaging (e.g., format parameters, cryptographic algorithms); yet only a single parameter addresses the actual content of the credential. The entire semantics, differentiating one credential (configuration) from another, must be expressed in a single string – the credential type. This is only practically feasible when the offered configurations are static, and limited in number.
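A sketch of such issuer metadata (field names follow the OpenID4VCI drafts; values are illustrative) shows how the entire semantics of each configuration hinges on a single type string:

```typescript
const issuerMetadata = {
  credential_issuer: "https://issuer.example",
  credential_configurations_supported: {
    university_degree_sdjwt: {
      format: "dc+sd-jwt",                        // packaging: flexible
      vct: "https://credentials.example/degree",  // content: one type string
      cryptographic_binding_methods_supported: ["jwk"],
      credential_signing_alg_values_supported: ["ES256"],
    },
    drivers_license_mdoc: {
      format: "mso_mdoc",                         // different packaging,
      doctype: "org.iso.18013.5.1.mDL",           // same single-string semantics
    },
  },
};
```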
Using claim descriptions (sets of claims path pointers), clients can select a limited number of claims out of a larger credential (configuration) [14], but this only adds the ability to request subsets of the predefined credential types offered by the issuer. As such, it is no improvement over OIDC’s custom flavor of JSONPath or JSON Schema (cf. supra) – yet is not interoperable with any of them, nor compatible with any existing OpenID client. Moreover, this approach makes the semantics of a credential dependent on the structure of the format – and assumes that the client already knows this structure. OpenID4VCI is thus only suited for issuers offering a select assortment of distinct bundles of information, with a fixed semantics agreed upon out-of-band.
The design of OpenID4VCI also precludes the issuance of credentials with subjects other than the principal. In scenarios where the client is not yet known – i.e., all cases except active offers by the issuer or updates with refresh tokens – the principal must (interactively) identify themself. Since every other step of the protocol is based on subject-agnostic credential (configuration) identifiers, there is no other way for the issuer to know about which subject a credential is requested, i.e., to which dataset to apply the credential configuration. Not only does this severely limit the kind of credentials that can be issued through OpenID4VCI; the need for interaction also complicates the automation of credential issuance. Taken together, the limitations described in this section lead us to conclude that OpenID4VCI is mainly targeted at the issuance of a handful of specific credentials, actively offered to (the client of) a known principal.30
2.3 OpenID for Verifiable Credential Presentations
Credentials issued via OpenID4VCI are not meant to be requested by – or transmitted to – verifiers themselves. Instead, the EUDI framework defines an extra intermediary trust service: a digital wallet, which requests credentials from issuers, and presents them to verifiers [7]. OpenID specifies the design of these wallets in OpenID4VP [15]. Unsurprisingly, it suffers from many of the same issues as OIDC and OpenID4VCI – without offering any practical innovations that would warrant a new specification.
Even more than in those other specifications, the architecture of OpenID4VP forces the wallet to be both authorization server and resource server. This precludes more flexible scenarios, in which these roles are federated or otherwise decentralized. Moreover, while it claims to provide enhanced security to the verification mechanism – and thereby to amplify the credibility of digital authentication procedures [57] – the specification seems to double down on a variant of the insecure OAuth 2.0 implicit authorization flow, which has long since been deprecated [51], and is no longer an option in OAuth 2.1 [52] (cf. the notes on OIDC and on OpenID4VCI).
Similar to the OIDC flow, a successful OpenID4VP request results in a response containing one or more credentials – now called VP tokens instead of ID tokens. To request and construct these (verifiable) presentations from the credentials available to the wallet, OpenID4VP includes a custom Digital Credentials Query Language (DCQL). There is not much more to this language than some metadata around the JSONPath-style claims descriptions of OpenID4VCI: sets of claims path pointers indicating the desired claims by their structural location in a known credential type (predefined out-of-band). Rather than achieving the claimed ‘comprehensive interoperability’ [57], the custom interfaces of the specification thus preclude a straightforward integration between conventional and contemporary OpenID technologies – let alone other identity management frameworks.
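A sketch of a DCQL query (structure follows the OpenID4VP drafts; values are illustrative) makes this structural coupling apparent:

```typescript
const dcqlQuery = {
  credentials: [
    {
      id: "degree_request",
      format: "dc+sd-jwt",
      meta: { vct_values: ["https://credentials.example/degree"] }, // type known out-of-band
      claims: [
        { path: ["degree", "name"] },   // structural location, not semantics
        { path: ["graduation_year"] },
      ],
    },
  ],
};
```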
3 A paradigm shift that never was
Given the strong parallel between OpenID4VP and OpenID4VCI, highlighted in the previous section, one must wonder about the reasons for having two separate specifications that regulate almost identical flows of information. After all, (verifiable) credentials are (verifiable) presentations themselves. In OIDC, for example, relying parties get the identity token (i.e., presentation) of a principal directly from the latter’s issuer.
The similarity becomes even more apparent when taking into account self-issued credentials. Extending OIDC, the Self-issued OpenID provider (SIOPv2) [60] specification defines an OpenID provider (i.e., issuer) controlled by the principal – either in the cloud or on their device, similar to a wallet – such that “the [principal] becomes the issuer of identity information [i.e., tokens], signed with keys under [their] control [in order to] present self-attested claims directly to the [relying party]” [60].31 Though the relying party’s trust relationship in SIOP is directly with the principal, rather than with a third-party issuer, from a technical perspective OIDC and SIOPv2 are almost identical: only the equality of the sub and iss claims indicates the difference.32 However, while OIDC and SIOPv2 employ a single, uniform protocol (OIDC), on top of which self-issuance is made possible, OpenID4VCI and OpenID4VP are – while functionally the same – specified separately. The question therefore remains: what are the advantages of such a double-tiered framework?
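The distinguishing check described above is minimal (a sketch with illustrative values; in practice SIOPv2 also allows a static self-issued issuer URI):

```typescript
const oidcToken = { iss: "https://idp.example", sub: "24400320" };
const siopToken = { iss: "did:example:subject123", sub: "did:example:subject123" };

// In a self-issued token, the principal is their own issuer.
const isSelfIssued = (t: { iss: string; sub: string }) => t.iss === t.sub;

console.log(isSelfIssued(oidcToken)); // false
console.log(isSelfIssued(siopToken)); // true
```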
In a white paper [61], OpenID calls its own architecture constitutive of a paradigm shift, driven by an evolution in user-centricity: the principal is put “in the center of the exchange between the verifier and the credential issuer” [61],33 granting them more control, privacy, and portability of their identity information. This ‘big shift’ must be understood against the background of ‘traditional’ federated models.34 In such systems, an issuer – in a trust relationship with the verifier – provides credentials just-in-time, i.e., each time a principal requires one for interacting with a relying party. OpenID’s new architecture figures in the trend of decentralization,35 which they contrast with federation. This is surprising, since federation is actually one of two forms of decentralization – the other one being distributed (peer-to-peer) models [63].36 Since the OpenID ecosystem is not a fully distributed one, we look into the benefits they attribute to their ‘beyond federated’ decentralized approach, and how these benefits contribute to the claimed increase in control, privacy, and portability.
3.1 Portability
As OpenID correctly states in their white paper, portability of identity information increases in a decentralized system, because principals can use their own identifier(s) at issuers and verifiers, instead of one namespaced to a specific third-party issuer and assigned to them [61]. However, this Bring Your Own Identity (BYOI) concept is already a cornerstone of ‘traditional’ federated models – largely popularized by OIDC itself. The question therefore remains: how does OpenID move beyond this in their new architecture, and what do users gain by it?
According to OpenID’s white paper, the increase in portability consists of the principal’s ability to “control their relationship with the verifiers independent from third party [issuers’] decisions or lifespan,” and therefore to “present credentials to the relying parties who do not have a federated relationship with the credential issuer” [61]. This is a strong claim, for which the white paper provides no support, nor any use case in which it would be required. In fact, in the same text, OpenID writes that “verifiers need to trust the respective credential issuer,” and that the establishment of such cross-domain, inter-organizational trust will require “regulatory or contractual relationships on top of technical interoperability” [61]. It is precisely this legally and technically supported trust relationship which constitutes a federation; and OpenID’s new architecture relies on it just as much as any other federated model.
If anything, the OpenID4VP specification increases portability in a more literal sense: storing credentials in a wallet may ensure their offline availability in case of technical problems at the issuer’s side, or – with an on-device wallet – in case of general internet failure. Then again, such functionality could also be implemented through existing specifications for asynchronous federation, e.g., OIDC’s Claims Aggregation extension [67]. None of the aspects of increased portability discussed in this section can therefore truly be attributed to OpenID’s new design.
3.2 Control
According to OpenID’s white paper, from a control perspective ‘decentralization’ means “not depending on one single body controlling … the ecosystem,” and thus enabling principals and other parties to make critical decisions, e.g., “from which [issuer] to obtain what credential,” and “when to disclose which credential to which verifier” [61]. Apart from the promised increase in portability (cf. supra), the specifications seem to rely predominantly on informed consent and selective disclosure to check this box.
Informed consent is indeed a cornerstone of privacy-aware technology. Verifiers and wallets should make sure that the context of a request – including a sufficiently specific purpose – is clear to the principal, and should obtain the latter’s consent – through explicit interaction – before disclosing information [15]. Again, however, this is nothing new – OIDC already stresses its importance. Neither does it answer the question concerning the split architecture, since both OpenID4VP and OpenID4VCI require consent [14; 15] – thereby redundantly overloading the user experience, contrary to their own claims (cf. Section 2.2).
Selective disclosure is an interesting feature. This data minimization technique – supported by multiple credential formats – enables principals to select specific claims from a credential in their wallet, creating a presentation that only discloses the selected information, without revealing the rest of the credential to the verifier [15]. This technique drastically improves the control of principals in scenarios in which credentials contain more claims than strictly required. It would be a strong advantage of OpenID’s new design, were it not that their architecture is also the main cause of such scenarios in the first place. By splitting the wallet from the issuer – issuing ‘reusable’ credentials from which presentations for multiple verifiers can be created – it indeed becomes necessary for the wallet to filter which claims are disclosed to which verifiers. Other models, like OIDC, do not have this issue to begin with: the issuer creates a new credential (presentation) for each request, tailored to the verifier.
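The mechanics behind selective disclosure can be sketched in the style of SD-JWT (per its drafts; values are illustrative): the issuer signs only salted digests of the claims, and the holder chooses which disclosures to attach to a presentation.

```typescript
import { createHash } from "node:crypto";

// A disclosure is a salted [salt, claim name, claim value] triple.
const disclosure = JSON.stringify(["6qMQvRL5haj", "birthdate", "1990-01-01"]);
const encoded = Buffer.from(disclosure).toString("base64url");

// Only this digest appears in the signed credential (its _sd array);
// withheld claims stay hidden behind their digests.
const digest = createHash("sha256").update(encoded).digest("base64url");

// To present: send the signed credential plus the encoded disclosures
// for the selected claims only.
```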
Neither informed consent nor selective disclosure therefore adds a real advantage to OpenID’s approach to decentralization; especially not when compared to (other) federated models. With respect to control, this lack of technological progress is not the core issue, though, since the regulations themselves do not allow for more (cf. Section 4.3).
3.3 Privacy
The increase in privacy claimed by OpenID is supposedly due to the principal’s ability to “directly present identity information to the relying parties,” who can “receive and validate presented credentials without [either the principal or the verifier] directly interacting with the issuer” [61]. In their white paper, they say this ‘most notable feature’ mimics physical credentials37, since “[issuers] no longer know what activity [principals] are performing at which relying party” [61]. In particular, they claim that “scenarios where the [issuer] has no legitimate reason to know which [relying party] the user wants to access resources from and when they do so” are not achievable with (other) federated flows [61].38
Again, these are strong statements. The principal can indeed present credentials directly to the relying party; but this is technically not different from the OIDC authorization code flow: the ID token passes from the issuer, through the user agent – in this case a browser instead of a wallet – to the verifier. Importantly, however, in both architectures, both the principal and the verifier have to interact with the issuer.
First, the principal must at some point retrieve the credential from the issuer, to store it in their wallet (or browser memory). In theory, this could indeed happen long before the principal presents the information to the relying party. However, OpenID’s specifications stress that for privacy considerations, wallets “should not store credentials longer than needed” [14]. In fact, since “presentation sessions … can be linked on the basis of unique values encoded in the credential,” wallets are advised to use “a unique credential per presentation or per verifier” [14] to avoid such correlation – “each with unique issuer signatures [and] keys” – and to discard the credential afterwards [15].39
Second, in order to verify the issuer’s signature, the verifier will have to contact the issuer upon receiving each presentation. While it is possible to cache some of the issuer’s verification material (e.g., public keys), this is equally true of (other) federated systems like OIDC. Moreover, caching becomes useless when the issuer and wallet follow the advice to use a unique key per credential and a unique credential per presentation (cf. supra).
Finally, even if OpenID’s architecture were to successfully prevent issuers from learning about the user’s activity at relying parties, it would have achieved this by introducing a new intermediary entity – the wallet – which will possess the same information instead. Whether this is desirable or not will depend on the context. While the design of OpenID4VCI and OpenID4VP can somewhat reduce the frequency of direct, synchronous interactions, it is therefore fair to say that OpenID’s strong privacy claims are at least an exaggeration. On the contrary, the considerations regarding the risk of correlation – emphasized in the specifications themselves – make it more plausible that the proposed flows are in fact not desirable for privacy after all.
In this section we critically assessed each of the benefits proclaimed by OpenID to constitute a user-centric paradigm shift in digital identity. We conclude that their claims about control, privacy, and portability are either plainly incorrect, exaggerated, or already present in (other) federated solutions. Moreover, we pointed out that some of their so-called ‘strong points’ in fact hardly make up for certain less desirable effects of the new architecture.
That OpenID’s claims do not survive adequate scrutiny also calls into question whether their specifications are truly a good choice for implementing the EUDI regulations. As such, there is nothing wrong with a specification that is narrowly tailored to a handful of specific use cases: even with all its limitations and vulnerabilities taken into account, OpenID’s architecture still supports most traditional scenarios of credential exchange. However, as a foothold for the EUDI infrastructure, their design drastically limits the potential of the EU’s strategy, both in its limited capabilities and in its lack of – backwards-compatible – options for evolution.
4 The politics of control
Having established OpenID’s inability to live up to its paradigm-shifting promises, we take a look at a number of choices made by the EU’s regulations themselves. From their promotion materials to the regulations’ recitals, it is clear that the EU’s vision is also full of big promises: “Union citizens and residents in the Union should have the right to a digital identity that is under their sole control” [7, art. 3]. Wallets aim to give those citizens “full control on what data they share to identify themselves with online services … at all times” [68]. Users will also be able to share digital documents,40 and “prove statements [i.e., specific personal attributes] about themselves and their relationships with anonymity (i.e. without revealing identifying data)” [68]. In other words, the EUDI regulation promises far-reaching self-sovereign identity (SSI): a digital identity model in which each individual controls who they are on the Web, i.e., what information they are associated with when interacting with online services.41
Note that the SSI model does not imply ownership of the identity information. The data itself can originate with another party, and be made available to the individual; the latter thus does not control the availability of the information. This is echoed in the OpenID trust model, which emphasizes that “it is still up to the verifier to decide whether to accept those credentials [and] it is still up to the issuer to decide whether to issue the credential to the [individual] in the first place [or] to revoke and invalidate the credential” [61]. The sovereignty of the individual therefore lies in their power to autonomously control who can access which of the available information under which circumstances. Even with an ideal technological implementation, however, it remains to be seen whether the legal requirements, formulated in the EUDI regulation, are actually compatible with such a form of self-sovereignty. In the following sections, we will look into the concept of trusted lists, and why they are both a necessity and a risk for internet freedom and security.
4.1 Trusted lists
Proposed by the European Telecommunications Standards Institute (ETSI) [78], trusted lists are perhaps the most pervasive regulatory device introduced in eIDAS, and – in particular – enforced in EUDI. These lists, published and maintained by Trusted List Providers (TLPs), contain the trust anchors (i.e., identifiers and public keys) of regulated Trust Service Providers (TSPs). The trust services, which these entities provide, include a variety of digital identity solutions [28]: (electronic) services involved in authentication processes; e.g., the issuance, validation, and verification of signed attestations (certificates), as well as their life cycle management, and that of their verification material (e.g., timestamps, ledgers, signatures, and seals) [6; 8].42
In particular, the European Commission maintains trusted lists of different types of TSPs – including wallets, attestation providers, signature services, and other certification authorities – that are granted the qualified status,43 and are thereby approved to provide qualified trust services [8].44 To be put on these lists, TSPs need to adhere to certain requirements [7, Annex V], and be registered by a registrar: a supervisory body of their Member State, which in turn notifies the European Commission [82; 83]. In the case of attestation providers, this registration has to be renewed on every change in the issued credentials, since the registered data includes “the attestation type(s) that the provider intends to issue to wallet units” [8].
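Conceptually, an entry on such a trusted list combines a provider’s trust anchor with its registered status (our own abstraction; actual ETSI trusted lists are far more detailed XML documents):

```typescript
type TrustServiceProvider = {
  name: string;
  serviceType: "wallet" | "attestation" | "signature" | "website"; // illustrative
  trustAnchor: { identifier: string; publicKey: string };
  status: "granted" | "withdrawn";         // the qualified status
  registeredAttestationTypes?: string[];   // must be renewed on every change
};

type TrustedList = {
  provider: string;                        // the Trusted List Provider (TLP)
  entries: TrustServiceProvider[];
};
```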
Surprisingly, this pervasive aspect of the regulation did not meet much general opposition. One specific use of trusted lists, however, caused an intense debate between the legislature and the internet community – in particular with browser vendors and advocates for net-neutrality. The topic under discussion was the qualified website authentication certificate (QWAC): an electronic attestation, in the form of a TLS certificate45 – or an attribute certificate cryptographically linked to one (ac-QWAC) [84] – and issued to EU-based, audited TSPs that comply with a number of “minimal security and liability obligations” [7].
Supported by a report from the European Union Agency for Cybersecurity (ENISA46), on critical improvements for website authentication [85], QWACs were introduced in eIDAS as a voluntary choice for websites. Their intent was to “provide a means by which a visitor to a website can be assured that there is a genuine and legitimate entity standing behind the website,” and as such to “contribute to the building of trust and confidence in conducting business online” [6]. The regulation only became an issue after a crucial amendment in EUDI [7, art. 45], requiring browsers to treat QWAC providers as root CAs – and thus to recognize and display these government-approved certificates to their users as an assurance of trustworthy services – regardless of the well-established security measures traditionally upheld by the browsers themselves [86]. In the following sections, we explore the two main consequences of this decision, which caused the worldwide criticism against it.47
4.2 A weakened internet security
TLS certificates can validate several characteristics of an internet connection, including the domain name and organization linked to the website. As with any certificate, trust in TLS verification depends on trust in the CA vouching for it. Since users can hardly be expected to know which of the many CAs to trust on the Web, browser vendors take up this burden by maintaining a ‘trusted list’ of vetted CAs, typically based on the advice of the Certification Authorities and Browsers (CA/B) Forum [87].48 In particular, until the early 2020s, websites complying with a strict set of Extended Validation (EV) rules – compiled by this CA/B Forum – were displayed with a ‘green shield’ indicator in the address bar of many browsers, to signal this to the user.
The eIDAS certificates, and EUDI’s obligation to display them in browsers, copy this design, but rely on government certification instead of a multi-stakeholder CA architecture. However, while EUDI literally states that “the results of existing industry-led initiatives, for example the [CA/B] Forum, have been taken into account” [6], the CA/B community – including most major browsers – had in fact already deprecated or discontinued the EV system by the time EUDI was drafted. They moved away from this practice after research by Google and UC Berkeley had shown that the indicators did not have a significant effect on the behavior of users: they “[did not] provide users with clear, actionable cues about online trust, and were therefore adding cost and complexity but little or no benefit” [87].
The cost–benefit analysis was, however, not the reason for the strong reaction which the regulation triggered. Responding to the proposal’s feedback rounds [70; 71], the Common Certification Authority Database (CCADB) [89] – an initiative of the Linux Foundation – together with major browsers, argued that mandating QWACs undermines technical neutrality, interoperability, and user privacy – principles that are central to the intent of eIDAS itself. Similar objections were later raised in an impact brief from the Internet Society (ISOC), which claimed that “ETSI’s assumption that browsers would add eIDAS-approved [TSPs] to the trusted root list violates [the ideal of an open, trustworthy Internet],” based on principles of collaboration, expertise, transparency, and consensus around “trust criteria mutually agreed among … relevant stakeholders” [87].
In a position paper [88], reiterating the objections of their earlier response [90], Mozilla called the new regulation an ‘unprecedented move’, which “will amount to forced website certificate whitelisting by government dictate and will irremediably harm users’ safety and security” [88; 90]. They emphasized that the proposal “goes against established best practices … created by consensus from the varied experiences of the Internet’s explosive growth [which] have successfully helped keep the Web secure for the past two decades” [90]. The obligation for browser vendors to automatically include TSPs in their root programs, would effectively “replace the security expertise of major browser companies … with legislation premised on weaker and discredited security architectures,” leading to “a regression in the security assurances that users have come to expect from their browsers” [88]. The requirements for inclusion in, for example, Mozilla’s root program, are more rigorous, and more transparent, providing for more public oversight, and more stringent audits than eIDAS’ criteria for TSPs [88]. Therefore, “by mandating that TSPs be supported by browsers in general, and in particular when they fail to meet the security and audit criteria of [the browsers’] root program, [EUDI] will negatively transform the website security ecosystem in a fundamental way” [88].
According to ISOC’s impact brief, these negative effects come in two forms: by issuing incorrect certificates, and through inability to rapidly address security incidents [87]. This risk assessment is echoed in an open letter published by the Electronic Frontier Foundation (EFF), signed by numerous cybersecurity researchers, advocates, and practitioners: “allowing some website certificates to bypass existing security standards … increases the risk that insecure or malicious certificates will be issued … and [makes] it impossible for the cybersecurity community to quickly respond when certificates are found to pose a risk” [91]. In general, the EFF characterizes the regulation as “a dangerous cybersecurity policy trend,” which goes against established norms in cybersecurity and risk management, and “compels private actors to forgo their duty to those who use their products and services, by assuming that because government-appointed [CAs] are subject to government security standards, they can pose no cybersecurity risk” [91].
4.3 (Re)centralized (dis)trust
The trusted lists concerning website authentication share a lot of similarities with their counterparts for providers of wallets, attestations, and signatures – in fact, with any kind of eIDAS/EUDI lists.49 In general, each of these trusted lists institutionalizes an elevated status, granted by national government registrars to a select group of TSPs [83]. This entails both economic and political risks.
The EUDI design allows only registered systems – abiding by the criteria set forth in the regulations – to participate in the EUDI ecosystem, and to interact with other parties via the architecture’s protocols [93]. Therefore, “those who are on the list receive economic advantages; those who are not on it have disadvantages” [94]. In itself, this is not an issue. However, several of the requirements can form economic obstacles (e.g., increased liability, financial minima) or technical hurdles (e.g., renewal for each changed attestation type) [83; 93]. This could lead to an ecosystem that favors affluent organizations, strengthening their privileged position in their respective techno-economic spheres, which increases risks like vendor lock-ins and gatekeeper behavior [94]. Rather than a decentralization of digital identity, such an ecosystem could therefore result in a (re)centralization instead, in which authentication on the Web is directed by a small number of key economic and political actors.
Importantly, this not only holds for the TSPs themselves, but also for relying parties making use of their services [95]. To obtain even the simplest attestation from a wallet, a relying party would need to jump through the hoops described above, regardless of the choice of the wallet user. This falls far short of the promise of ‘user-oriented identity management’ that would put users ‘in charge’ of their own personal information. Indeed, while “people can choose which bits of their identity information to share” [57], they have nothing to say about which issuer can provide their credentials, nor to which verifiers they can present their attributes. As such, wallets merely provide a uniform interface through which only registered parties – willing to pay the cost of entry – can exchange information.
Moreover, many of the implementing regulations require TSPs to log and preserve information about their systems and the parties they interact with [96; 97; 98]. In certain cases (e.g., security breaches), TSPs have the obligation to notify the European Commission – upon which the latter can decide to suspend the service/provider from the ecosystem. These requirements are not limited to information about TSPs, wallets, and relying parties: in some cases – e.g., identity matching by cross-border services – personal information about the wallet user, or subject of authentication, must also be logged.
Given the EUDI’s possible tendency towards (re)centralization (cf. supra), these increased logging requirements could pose an additional risk of abuse; e.g., monitoring the usage of wallets to profile citizens’ behavior and interests – either by malicious TSPs themselves, or by external actors targeting them. Such risks not only negate OpenID’s emphasis on minimizing unnecessary data disclosure [14; 15; 57]; they also stand in sharp contrast with other EU legislation, including data protection regulations like the GDPR, and with the EUDI regulation’s own insistence on unlinkability. The latter is particularly surprising, since it is part of the same regulation, and specifically aims to prevent any party from bringing together “data that allows for tracking, linking, correlating or otherwise obtain knowledge of transactions or user behavior unless explicitly authorized by the user” [7].50 This is precisely the tension pointed out by the expert group for Legal and Security Issues (LSI) of the Council of European Professional Informatics Societies (CEPIS): “Unfortunately the development of the Architecture Reference Framework (ARF) … is not transparent, but behind closed doors, and available drafts do not support the legal requirements for safeguards for unlinkability” [86].
Most importantly, however, the economic and political design of EUDI, as described above, allows governments to whitelist favored service providers, without much restriction. CEPIS criticizes that there are “no safeguards that prevent the governments … from exercising surveillance over everything its users do with it” [86]. It is important to realize that these scenarios are not ‘horror stories’, but rather “deductions from already known types of attack, economic incentives and … historical experience” [94]. Organizations managing CA root programs, like Mozilla, have first-hand experience with these dangers. They list, amongst others, the governments of Mauritius, Kazakhstan, and China as “authoritarian regimes [who] have long sought to override [trusted list] policies” [88].
Note that none of the critics referred to above are against the concept of trusted lists. In fact, they are an architectural necessity in any decentralized system. At the same time, however, they are an instrument of power, which should urge us to ask questions like: “In whose hands does this tool fall?”, “How strong are the barriers against misappropriation, commercialization, and criminal exploitation?”, “How is abuse prevented if political or economic interests are involved?” [94]. After all, “a globally connected Internet is premised on the ability of Internet users to access and use resources in other networks without unnecessary restrictions” [87]. Giving political actors the power to directly influence such restrictions – either by imposing them or by overruling them – sets a dangerous precedent.
5 Alternative solutions
In this final section, we highlight some of the alternative solutions already mentioned earlier, and propose a number of additional lines of research worth looking into. Since immediate legislative changes are improbable, we focus on technological specifications that might overcome some of the issues we raised, and pave the way towards a truly self-sovereign digital authentication framework.
To achieve this, we argue that solutions should be aimed at a broad, explicitly defined interpretation of authentication and identity, as put forward in Section 1.1 – or should at least provide sufficient points of extensibility to enable probable evolution paths towards it. The core principles of the semantic Web are a good starting point: global identifiers (i.e., URIs, and in particular DIDs [10]), combined with semantically rich structures like RDF [100], allow for far-reaching interoperability with existing and future technologies.
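As a minimal sketch of these principles – assuming only the rdflib Python library, with a hypothetical DID and claim – the following snippet shows how a globally identified subject can carry a semantically typed claim that any RDF-aware consumer can interpret:

    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import FOAF

    # Hypothetical example: a claim about a subject identified by a DID,
    # expressed as an RDF triple with a globally defined predicate.
    graph = Graph()
    subject = URIRef("did:example:123456789abcdefghi")  # illustrative DID
    graph.add((subject, FOAF.name, Literal("Alice")))

    # The same claim can be serialized in any RDF syntax (Turtle,
    # JSON-LD, N-Triples, ...) without losing its meaning.
    print(graph.serialize(format="turtle"))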
Based on this strong foundation, W3C’s models of verifiable credentials and presentations [11] – also supported by OpenID – offer the most potential for aligning the wide variety of electronic attestations of attributes. As we pointed out in Section 1.2, they are not inherently more expressive or secure than classic OIDC tokens, but they achieve those features in a semantically richer, more interoperable, and extensible manner. On the other hand, while W3C VCs can handle partial identity (e.g., roles, pseudonyms [101]), they are still targeted at a subject, and can therefore not express certificates in the broader sense. A compatible model for such general ‘attestation documents’ thus remains a crucial topic for future research.
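For illustration, the sketch below shows what a minimal credential looks like in the W3C VC Data Model 2.0; the issuer, subject, and claim values are hypothetical, and a real credential would additionally carry a cryptographic proof:

    import json

    # Minimal W3C Verifiable Credential (Data Model 2.0); all identifiers
    # and claim values are illustrative. A real credential would also
    # include a proof (e.g., a Data Integrity proof or an enveloping JWT).
    credential = {
        "@context": ["https://www.w3.org/ns/credentials/v2"],
        "type": ["VerifiableCredential"],
        "issuer": "did:example:university",
        "credentialSubject": {
            "id": "did:example:alice",
            "alumniOf": "Example University",
        },
    }
    print(json.dumps(credential, indent=2))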
Throughout Section 2, we already mentioned several other OpenID specifications that together are functionally equivalent to OpenID4VCI and OpenID4VP. Based on the well-established OIDC [3], SIOPv2 aligns credentials with self-issued presentations [60], OIDC4IDA adds a levels-of-assurance model [102; 103; 104], and OIDC Claims Aggregation can substitute for wallet-like behavior [67]. In contrast to OpenID4VCI and OpenID4VP, the latter specifications rely on existing standards where possible, and provide strong extensibility and interoperability – both between themselves and with external specifications.
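As an impression of how these existing building blocks already cover EUDI-style scenarios, the sketch below shows an OIDC4IDA-style request for verified claims, embedded in OIDC’s standard claims request parameter; the trust framework value and requested claims are illustrative:

    import json

    # Sketch of an OIDC4IDA request: the relying party asks for claims
    # that were verified under a given trust framework, via the regular
    # OIDC 'claims' request parameter. The 'eidas' value is illustrative;
    # 'None' (JSON null) simply requests the claim without constraints.
    claims_request = {
        "userinfo": {
            "verified_claims": {
                "verification": {"trust_framework": {"value": "eidas"}},
                "claims": {"given_name": None, "family_name": None},
            }
        }
    }
    print(json.dumps(claims_request, indent=2))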
However, we also pointed out several problems with the OpenID ecosystem in general, which should be addressed in any possible alternative. In particular, attention needs to be paid to best practices in internet security and software design – such as the separation of orthogonal concerns. Ideally, an alternative solution would therefore be based directly on OAuth 2.1 [52]. This would lift OpenID’s restriction to a single endpoint, and instead allow the immense variety of attestations to be exchanged via any kind of interface: classic RESTful APIs, database queries, web streams, etc.
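A minimal sketch of the resulting simplicity, with a hypothetical endpoint and an illustrative token: under plain OAuth, the same bearer token can protect any interface that serves attestations, rather than one prescribed credential endpoint:

    import requests

    # Hypothetical endpoint; the token value is the illustrative example
    # from RFC 6749. The same bearer pattern works for a RESTful API (as
    # here), a database query endpoint, a web stream, etc.
    access_token = "2YotnFZFEjr1zCsicMWpAA"
    response = requests.get(
        "https://issuer.example.org/attestations/diploma",
        headers={"Authorization": f"Bearer {access_token}"},
    )
    print(response.status_code, response.json())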
To access such heterogeneous forms of attestations, however, another issue needs to be addressed first. In Section 2.2 and Section 2.3, we criticized the inadequacy of a single string to express the entire semantics of each type of credential – even for the limited variety exchangeable through OpenID wallets. We also pointed out the issues with OpenID’s several custom, non-interoperable query languages (i.e., DCQL and claim descriptions). Within the restrictions of OpenID, the JSONPath or JSON Schema specifications [53; 105] offer a more standardized, interoperable alternative. However, as we have pointed out in Section 2.1, even a standardized approach based on JSON offers limited flexibility and interoperability in light of a more heterogeneous range of credentials. The suggested addition of SPARQL queries [106], to request specific claims from the combined credentials in a wallet, would already make a big difference [54]. In order to also include other data interfaces, research into a more abstract query (meta-)language is necessary.
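The following sketch, run with rdflib over a toy wallet graph (the vocabulary, DID, and claim values are illustrative), shows the appeal of the SPARQL-based approach: a single query requests a specific claim across the combined credentials, independent of how each of them was packaged:

    from rdflib import Graph

    # Toy wallet: claims from several credentials, merged into one graph.
    wallet = Graph()
    wallet.parse(data="""
        @prefix schema: <http://schema.org/> .
        <did:example:alice> schema:birthDate "1990-01-01" .
        <did:example:alice> schema:nationality "BE" .
    """, format="turtle")

    # One SPARQL query selects exactly the claim the verifier needs,
    # regardless of credential format or issuer.
    results = wallet.query("""
        PREFIX schema: <http://schema.org/>
        SELECT ?birthDate
        WHERE { <did:example:alice> schema:birthDate ?birthDate . }
    """)
    for row in results:
        print(row.birthDate)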
A choice for OAuth would also open up the possibility of using its User-Managed Access (UMA) extension [107; 108]. By modeling a dynamic negotiation with the verifier, UMA enables more complex authorization contexts to be established, thereby lifting OpenID4VCI’s limitation to static, preset credential configurations (cf. Section 2.2). UMA also emphasizes asynchronous interaction with the principal, which opens the way for use cases involving automation. In earlier work, we already discussed UMA in more detail as an alternative to approaches involving OIDC and access control lists [109]. We also provided a profile specification for UMA, called Authorization for Data Spaces (A4DS) [110], which puts forward an authorization model in which wallets let users regulate access to data that remains safely at its original source – functioning more like remote keys than like portable hard drives.
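As a rough sketch of the UMA 2.0 grant on the wire – with a hypothetical authorization server and illustrative values – the client exchanges a permission ticket for an access token, possibly long after, and independently of, the user’s policy decisions:

    import requests

    # Hypothetical authorization server; the grant type URI is the one
    # defined by the UMA 2.0 Grant for OAuth 2.0 Authorization.
    AS_TOKEN_ENDPOINT = "https://as.example.org/token"

    # The ticket came from the resource server's 401 response to an
    # initial, unauthorized request; the claim token pushes the client's
    # claims into the negotiation. All values are illustrative.
    response = requests.post(AS_TOKEN_ENDPOINT, data={
        "grant_type": "urn:ietf:params:oauth:grant-type:uma-ticket",
        "ticket": "016f84e8-f9b9-11e0-bd6f-0021cc6004de",
        "claim_token": "<e.g., an ID token>",
        "claim_token_format":
            "http://openid.net/specs/openid-connect-core-1_0.html#IDToken",
    })

    # The server returns either an access token, or a 'need_info' error
    # that continues the negotiation (e.g., pending the user's approval).
    print(response.status_code, response.json())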
Building on the pointers in this section, one other interesting project is the Grant Negotiation and Authorization Protocol (GNAP) [111]. This specification – sometimes hailed as ‘OAuth 3.0’ – combines two decades of best practices around OAuth 2.0 with an OIDC-like identity provider and an authorization model inspired by UMA. It is important to keep in mind, though, that neither GNAP nor any other alternative will be able to fulfill the promise of a decentralized, self-sovereign European identity framework, as long as the risks in Section 4 – caused by the economic and political impact of trusted lists (cf. Section 4.1) – are not addressed. Only through transparent collaboration with experts and stakeholders can the technological potential of these solutions be realized.
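To give a hedged impression of GNAP’s style (the endpoint, access type, and key reference below are hypothetical), the sketch shows how a client opens a negotiation by describing, in a single structured request, what it wants, who it is, and how the user can be involved:

    import json

    # Sketch of a GNAP grant request (cf. RFC 9635): one JSON document,
    # POSTed to the AS's grant endpoint, negotiates tokens, subject
    # information, and user interaction at once. Values are illustrative.
    grant_request = {
        "access_token": {
            "access": [{"type": "attestation-api", "actions": ["read"]}]
        },
        "subject": {"sub_id_formats": ["iss_sub"]},
        "client": {"key": "example-client-key-reference"},
        "interact": {"start": ["redirect"]},
    }
    print(json.dumps(grant_request, indent=2))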
Summary and conclusion
We started from the observation that, in legal and technical sources pertaining to identity in data spaces – in particular those related to recent EU regulations – the term ‘identity’ is ill-defined. Aggregating several older international standards’ glossaries, in Section 1.1 we settled on a workable definition: an identity is a subset of an entity’s characteristics, sufficient to represent and recognize that entity. We also clarified the concept of ‘authentication’: the process of determining the authenticity of information by verifying its origin and integrity, often through a (cryptographic) binding between multiple claims. In Section 1.2, we explained the roles of these concepts in the trust relations of the issuer-holder-verifier model. We also showed how practically all verifiable information – any certificate – can be identity information (i.e., a credential); in particular in light of context-dependent (partial) identities like roles, pseudonyms, and anonymity.
Applying our findings to OpenID’s architecture for the EUDI framework, in Section 2 we exposed several issues with OIDC, OpenID4VCI, and OpenID4VP, indicating a mismatch between a general conceptualization of identity and the limited capabilities of those specifications. Implementing the EUDI framework in an OIDC-based architecture precludes a healthy separation of concerns, leaves the door open for insecure practices, and immediately limits the flexibility of the provided interfaces (cf. Section 2.1). While OpenID4VCI and OpenID4VP replace identity tokens with credentials, their reliance on preset configurations, lack of expressivity in credential types, limited query language, and format-dependent claim semantics drastically narrow the use cases to which they can be applied. Furthermore, since their credentials must have an (interactively) identified subject, the possibilities for automated scenarios are limited (cf. Section 2.2 and Section 2.3).
In Section 3, we discussed the strong parallel between OpenID4VP and OpenID4VCI, comparing them to self-issued credentials in SIOPv2 (OIDC), and self-asserted claims in W3C’s Verifiable Presentations. We found that the latter specifications build direct, holder-based issuance on top of – and compatible with – their respective third-party issuance flows, while the former provide similar functionality through almost identical yet incompatible mechanisms. Looking for the rationale behind this double-tiered design, we discussed the paradigm shift – claimed by OpenID – away from federated models and towards user-centricity. We refuted its claimed increase in user control, privacy, and portability of identity information; and concluded that the OpenID trust model in no aspect goes beyond existing decentralized approaches:
While an increase in offline functionality could be attributed to OpenID’s new architecture, the BYOI portability it proclaims is not a real innovation, since this benefit is already present in many other decentralized systems – including OpenID’s own OIDC (Section 3.1). The same goes for the facilitation of informed consent to bolster people’s control over their personal information.
Selective disclosure, on the other hand – providing fine-grained control over individual claims in larger credentials – predominantly applies to scenarios that arise from OpenID’s design itself. In other decentralized models, this form of data minimization is much less of a necessity (Section 3.2).
The introduction of a user-bound wallet also does not provide the promised increase in privacy. OpenID provides no reason why a wallet provider would protect the private information in credentials better than their issuer. Moreover, this new intermediary does not truly make direct interaction between issuers and verifiers redundant – contrary to OpenID’s claims. In fact, when taking into account OpenID’s own privacy considerations, the resulting flow is practically identical to OIDC (Section 3.3).
From this analysis, we concluded that OpenID’s specifications offer a technological framework that is tailored to a specific set of use cases – including most classic scenarios of credential exchange – without necessarily offering more than other decentralized identity models. Since this limits the capabilities and interoperability of the ecosystem, we called into question whether OpenID is truly a good choice as a foundation for EUDI.
In Section 4, we looked into a number of legislative choices of the European regulations themselves, and discussed their impact on the EU’s promise of self-sovereign identity. In Section 4.1, we explained the introduction of trusted lists, assurance levels, and qualified service providers, as well as their increasingly strong constraints – and ties to national governments – through the EUDI’s amendments to the eIDAS regulation. As a concrete example, in Section 4.2 we highlighted the commotion in the internet community around qualified website authentication certificates – called out by many as a dangerous trend in cybersecurity policy, weakening global internet security by violating established norms on net neutrality and privacy. Extrapolating these implications to trusted lists in general, we emphasized the risks of making participation in an advantageous market dependent on a registration procedure that forms an economic cost of entry, and is governed by political institutions. The economic and political incentives in such ecosystems not only risk a de facto recentralization of digital identity, driven by gatekeeper behavior of privileged market players; but, without sufficient safeguards, they are also vulnerable to government abuse.
For individual people, the regulations fall equally short of their goal. As we discussed in Section 4.3, the exclusory ecosystem severely restricts people in what they can do with their personal information. Because the ecosystem is limited to registered parties, individuals cannot freely choose which issuers and verifiers to interact with, even if those parties meet all technical requirements. Such an ecosystem thus hardly fulfills the promise of user-oriented identity management in which individuals are in charge. Moreover, given the regulations’ increased requirements for logging and preserving information – and disclosing it to government institutions – this tendency towards recentralization poses the additional risk of monitoring citizens’ behavior and interests, regardless of the privacy-preserving capabilities of the chosen technical framework. This stands in sharp contrast with other legal requirements (e.g., the GDPR), and with the regulation’s own promises of unlinkability.
We wrapped up this paper in Section 5 by arguing that the risks related to EUDI will not diminish until the regulators revise the problematic use of trusted lists. In anticipation of that, we gave a number of suggestions for research into technical solutions that overcome some of the issues with the architecture of OpenID. Leveraging the extensibility and interoperability of semantic Web models (URIs, DIDs, RDF), we take W3C VCs to be a good foundation, even though they lack support for attestation documents that are not subject-based. We reiterated the functional equivalence between the new OpenID4VCI and OpenID4VP specifications on the one hand, and a combination of the already existing OIDC, SIOPv2, OIDC4IDA, and OIDC Claims Aggregation on the other. However, we advocated for an architecturally sounder framework based directly on OAuth, which has much broader applicability, especially when extended with the asynchronous and dynamic features of UMA, and their mutual evolution path through GNAP. Future research should build on these, in line with the A4DS profile, and needs to look into query (meta-)languages that are better suited to uniformly request attestations from a more heterogeneous range of providers.
We conclude that, while OpenID’s new architecture can indeed implement the limited set of use cases that the EUDI regulations allow, it inherits an architectural history of ignoring best practices in software design, internet security, and interoperability, without offering any innovation: both the technical functionality (e.g., semantic expressivity, query languages, automation) and the non-functional features highlighted by the design (i.e., increased control, privacy, and portability) are already achievable with equivalent decentralized solutions existing today, such as the OIDC extensions for self-issuance, identity assurance, and claims aggregation.
Furthermore, without a thorough analysis of (digital) identity, the regulations and their architecture have restricted themselves to a narrow intuition of what credential exchange can look like – thereby missing the opportunity to construct a truly uniform model of authentication on the Web. Alternatives that aim at a broader interpretation of authentication and identity include the combination of OAuth 2.1 with UMA, and GNAP. In either case, future research – in particular into uniform query (meta-)languages – is needed to support less trivial use cases.
However, even a more capable technical framework cannot surpass the restrictions imposed by the EUDI regulations themselves. The manner in which the regulators have institutionalized EUDI – in particular through the use of trusted lists – risks creating perverse economic and political incentives. Rather than building an inclusive, decentralized ecosystem of self-sovereign authentication on the Web, these incentives can lead to a recentralization of digital identity, and to dangerous vulnerabilities for abuse (e.g., the profiling of citizens’ behavior). In its current state, the EUDI ecosystem therefore risks becoming a restriction on people’s choice and control, rather than an environment in which people feel in charge of their personal information. Authentication then becomes “an entry ticket to a system in which every transaction is clearly assigned to a person” [94].