Public Key Infrastructure (PKI) is a complex set of technologies, people, policies, and procedures used together to request, create, manage, store, distribute, and ultimately revoke digital certificates, which bind public keys to identities (such as a MilDep organization, physical address, personal device, or email account). These certificates, e.g., X.509, are complicated structures with potential vulnerabilities, and authenticated encryption depends on them. For example, despite advances in security engineering, authentication in applications such as email and the Web still relies primarily on X.509 public key cryptography, first standardized in 1988.

Zero Trust controls take an adaptive, risk- and context-aware approach to the enterprise, granting least privilege access based on verifying who is requesting access, the context of the request, and the risk of the MilDep access environment.

Let’s discuss some issues. Our conceptual Zero Trust platform is an intelligent, data-centric micro-perimeter of computational components placed around information, protecting it with strong encryption tied to intelligent authentication. Trust is composed from many different technologies in order to provide new functionality that improves the user’s quality of life and enables technological advances in critical areas of information-centric analytics. X.509 by itself does not establish trust. The current DISA system needs a more precise X.509 policy upgrade.

Our Zero Trust approach requires integrating these heterogeneous, domain-specific tools into a common co-simulation platform. The Zero Trust concept is backed by an open-source cryptographic tool set that enables this integration, addressing challenges and concerns that currently limit Navy physical network protocols and cyber resources. Our approach accelerates the development of standards and best practices for interoperability and identity-centric cybersecurity. Our Zero Trust architecture builds its defenses on a new cryptography initiative, which aims to develop cryptographic algorithm standards that can work within the confines of a simple electronic device, validated, verified, and authenticated under this Zero Trust model.

In our Zero Trust platform we use cryptography that protects communications against espionage: an eavesdropper who intercepts a message will be unable to decipher it.

Cryptography also protects communications against sabotage: a forger who fabricates or modifies a message will be unable to deceive the receiver. This is useful for all types of information. Our approach therefore aligns with the fundamentals of cybersecurity: availability, confidentiality, and integrity. Compliance regulations within the DoD tend to be inherently reactive and narrowly targeted, whether by industry, geography, or type of information that needs to be protected. These requirements often focus heavily on the confidentiality and then on the integrity of the data. Although these are certainly important issues, they do little or nothing to ensure overall resilience; that is, that mission-critical systems and data are available when the Warfighter, industry partners, and enablers need them. For example, existing methods of protecting DoD users against server compromise require users to verify recipients’ accounts manually, in person. This simply hasn’t worked. One goal of our Zero Trust model and its updated cryptography is that the relationship between online personas and public keys should be automatically verifiable and publicly auditable. Users should be able to see all the keys that have been attached to an account, and any attempt to tamper with the record should be publicly visible. This also ensures that senders will always use the same keys that account owners are verifying. This is called key and certificate transparency. In our concept, Key Transparency is a general-use, transparent directory that makes it easy for developers to build systems of all kinds with independently auditable account data. It can be used in a variety of scenarios where data needs to be encrypted or authenticated, and to build security features that are easy for people to understand while supporting important user needs like account recovery.
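The "publicly auditable" property described above, where any tampering with the key record is visible, can be sketched with an append-only Merkle tree over account/key records. This is a minimal, standard-library-only illustration; the account names are hypothetical, and a real Key Transparency directory adds signed tree heads, inclusion proofs, and consistency proofs.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold leaf hashes pairwise up to a single root hash."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:               # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical directory entries binding accounts to public keys
log = [b"alice@navy.example:key1", b"bob@navy.example:key7"]
root_before = merkle_root(log)

# Appending (or silently rewriting) a record changes the root, so auditors
# who compare roots can detect any change to the key history.
log.append(b"alice@navy.example:key2")
assert merkle_root(log) != root_before
```

Publishing only the small root hash lets anyone audit the full record without trusting the directory operator.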

Our aim with Certificate Transparency is to remedy these X.509/SSL/TLS certificate-based threats by making the issuance and existence of X.509/SSL certificates open to scrutiny by domain owners, the DISA CA, and domain users. Specifically, Certificate Transparency has three main goals:

  • Make it impossible (or at least very difficult) for a CA to issue an X.509/SSL certificate for a domain without the certificate being visible to the owner of that domain.
  • Provide an open auditing and monitoring system that lets any domain owner or the current DISA CA determine whether certificates have been mistakenly or maliciously issued. (DISA has a valid process today, but it is fairly costly to upgrade.)
  • Protect DoD users (as much as possible) from being duped by certificates that were mistakenly or maliciously issued.

Certificate Transparency strengthens the chains of trust that extend from the CA all the way down to individual servers, making HTTPS connections more reliable and less vulnerable to interception or impersonation. But what’s more, as a general security measure, Certificate Transparency helps guard against broader Internet security attacks, making browsing safer for all users.

One significant use case, and the impetus for this paper, is that many of the sensors, actuators, and other micro-machines used in Navy tactical networks, which will function as eyes, ears, and hands for the Warfighter, will run on scant electrical power and use circuitry far more limited than the chips found in even the simplest cell phone. Securing data in this sort of constrained environment is difficult for the existing physical Navy networks. These solutions typically use symmetric cryptography, the less resource-intensive form, in which both the sender and recipient have an advance copy of a digital key that can encrypt and decrypt messages.

Our Zero Trust cryptography solution specifies that these algorithms should provide one especially useful tool for symmetric-crypto applications: authenticated encryption with associated data (AEAD), which allows a recipient to check the integrity of both the encrypted and unencrypted information in a message. It also stipulates that if a hash function is used to create a digital fingerprint of the data, the function should share resources with the AEAD to reduce the cost of implementation.
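The AEAD interface described above can be illustrated with a toy encrypt-then-MAC construction: the recipient verifies both the ciphertext and the unencrypted associated data before decrypting. This sketch (a hash-derived keystream plus HMAC, standard library only) is didactic, not a production AEAD; a real constrained-device implementation would use a vetted algorithm such as AES-GCM or a lightweight AEAD like Ascon.

```python
import hashlib, hmac, secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by hashing key || nonce || counter (toy construction)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(key, nonce, plaintext, associated_data):
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    # The tag covers the nonce, the unencrypted associated data, and the ciphertext
    tag = hmac.new(key, nonce + associated_data + ct, hashlib.sha256).digest()
    return ct, tag

def open_(key, nonce, ct, associated_data, tag):
    expect = hmac.new(key, nonce + associated_data + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("integrity check failed")   # ciphertext or AAD tampered
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, nonce, len(ct))))

key, nonce = secrets.token_bytes(32), secrets.token_bytes(12)
ct, tag = seal(key, nonce, b"sensor reading 42", b"sensor-id:7")
assert open_(key, nonce, ct, b"sensor-id:7", tag) == b"sensor reading 42"
```

Tampering with either the ciphertext or the associated data ("sensor-id:7" here) makes `open_` fail, which is exactly the integrity property AEAD gives a constrained receiver.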

Our cryptography libraries incorporate post-quantum cryptography, focused on lattice-based, hash-based, and code-based cryptography (with a further interest in quantum key distribution), alongside the DoD Public Key Infrastructure (PKI). The system provides forward secrecy when it generates one random secret key per session to complete a key agreement, rather than deriving keys with a deterministic algorithm.

Attack surfaces are increasing, and attack surfaces are layered; discovery of code paths that handle user input increases the attack surface. Further, our focus is on the broader development of a zero-trusted identity ecosystem using disruptive technologies: mobile platforms, blockchains, robotic systems (autonomous robotic systems need to be highly capable, perceptive, dexterous, and mobile, able to operate safely in collaboration with humans, be easily tasked, and be quickly integrated into the rest of the DoD enterprise or tactical networks), and new keyless, scalable digital-signature-based authentication for electronic data, machines, and humans. In short, our approach protects PIV. In other words, it focuses on Digital Identity, which expands security and privacy from compliance and prevention tools into critical business and acquisition enablers, and provides a risk-based prioritization of capability gaps for a robust, mature, cyber-secure environment for the Warfighter.

As an example, our Zero Trust platforms use a cross-environment set of APIs that enables a secure SDK to communicate with other devices (securely validating, verifying, and authenticating) across a full ecosystem of devices, forming a tool that can also write to multiple cloud databases, transmit large data sets, and perform searches across several of these databases simultaneously.

Our focus in this exercise is DoD credential theft. The challenges of securing cloud-based infrastructures (secret management, key management, data protection, and application identity) are paramount, and mature solutions for them do not yet exist. A related question is how to mix VM-based, container-based, and serverless applications so that all of them operate as a single, logical app; this infrastructure does not exist today. Using new encryption protocols, the Zero Trust model could potentially enable end-to-end secure Web apps, end-to-end secure mailing lists, and group chat apps with cooperative key agreement. One way to apply multiple-key techniques is to protect data at rest. Current Navy applications are vulnerable as well. For this purpose, our ZT model gives each Navy user a unique asymmetric key pair consisting of a public and a private key based on Elliptic Curve Cryptography (ECC). These two keys are mathematically related, but it is not technically feasible to calculate the private key given only the public key.

Credential-stealing attacks tend to bypass traditional DoD IT security software. But attacks are complex and multi-step. Being able to detect them in process, and to respond quickly and effectively enough to kick attackers out and restore security, is essential to resilient network security today.

Vulnerabilities are still critical. Fixing vulnerabilities is still vital for security, and introducing new vulnerabilities into existing systems is still a disaster. But strong authentication and robust incident response are also critical.

The Internet is an open system, where the identity communication path is non-physical and may include any number of eavesdropping and active interference possibilities. Internet communication is thus much like anonymous postcards, which are answered by anonymous recipients. However, these postcards, open for anyone to read and even write in, must carry messages between specific endpoints in a secure and private way. The solution is to use encryption (to assure privacy) and certification (to assure that communication is happening between the desired endpoints and that it is tamperproof). Our concept and approach deal with the question of certification in the DoD; the closely related question of encryption is also discussed, in order to set up the various certification stages. The problems caused by false certification, or by the absence of certification mechanisms, range from a man-in-the-middle attack intended to gain knowledge of controlled data, to a completely open situation granting access to data and resources.

X.509 focuses on defining a mechanism by which information can be made available in a secure way to a third party: the certificate itself. However, X.509 does not intend to address the level of effort needed to validate the information in a certificate, nor does it define a global meaning for that information outside the CA’s own management acts.

Why X.509? A Common Access Card (CAC) is a smart card used for identification of active-duty military personnel, selected reserve, US Department of Defense (DoD) civilian employees, and eligible contractor personnel. In addition to providing physical access to buildings and protected areas, it also allows access to DoD computer networks and systems, satisfying two-factor authentication, digital security, and data encryption requirements. It leverages a PKI security certificate to verify a cardholder’s identity prior to allowing access to protected resources. DISA, by default, generates a self-signed certificate for a secure connection with the browser. The default DISA certificate has a basic constraint for security reasons: it is restricted to server and client authentication and cannot be used as a Certificate Authority (CA) certificate. However, DISA can use the signed enterprise certificate authority X.509 certificate.

The request for and presentation of the client certificate happen during initial SSL session establishment. There are two core elements to the process of a user gaining access to an application with a CAC:

  • Authentication – occurs during SSL session establishment and entails:
    ◦ Verifying the certificate date
    ◦ Verifying revocation status using Online Certificate Status Protocol (OCSP)
    ◦ Verifying the full chain to the Certificate Authority (CA)
  • Authorization – occurs after SSL session establishment and entails matching the certificate Subject Alternative Name (SAN) against the User Principal Name (UPN) of the appropriate principal in Active Directory.
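Assuming hypothetical record structures in place of real X.509 parsing, network OCSP queries, and signature verification, the two-phase CAC access check might be sketched as:

```python
from datetime import datetime, timezone

def authenticate(cert, ocsp_revoked: set, trusted_ca: str) -> bool:
    """Authentication phase: date, revocation status, and chain checks."""
    now = datetime.now(timezone.utc)
    if not (cert["not_before"] <= now <= cert["not_after"]):
        return False                                  # certificate date check
    if cert["serial"] in ocsp_revoked:
        return False                                  # OCSP revocation status
    return cert["issuer_chain"][-1] == trusted_ca     # full chain to the CA

def authorize(cert, active_directory: dict) -> bool:
    """Authorization phase: match the certificate SAN against the AD UPN."""
    return active_directory.get(cert["san"]) is not None

# Hypothetical certificate record and directory; real fields come from X.509
cert = {
    "serial": 1001,
    "not_before": datetime(2024, 1, 1, tzinfo=timezone.utc),
    "not_after": datetime(2099, 1, 1, tzinfo=timezone.utc),
    "issuer_chain": ["DoD ID CA-59", "DoD Root CA 3"],
    "san": "1234567890@mil",
}
ad = {"1234567890@mil": "jdoe"}
assert authenticate(cert, ocsp_revoked={2002}, trusted_ca="DoD Root CA 3")
assert authorize(cert, ad)
```

Keeping the two phases separate mirrors the flow above: authentication binds the session to a valid certificate, and authorization binds the certificate to a directory principal.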

This document and approach remain aware of context and capability gaps in perimeter networks and of the new concepts of Zero Trust. Orchestration and automation are key enablers of a new paradigm shift for the Navy and other MilDeps, and credential hardening is a key enabler for trust in this new disruptive technology. This document suggests ways to protect Navy credentials and other aspects of enterprise applications using new rooted-trust concepts, covering:

  • How cloud and mobility are driving Navy enterprise transformation
  • The challenge of legacy Navy network-centric methods
  • Why a Navy user- and application-centric approach strengthens a security model based on Zero Trust
  • How to support access to internal apps from any device, anywhere

Conventional security models operate on the outdated assumption that everything inside an organization’s network can be trusted; given increased attack sophistication and insider threats, new security measures are needed to stop threats from spreading once they are inside. Because traditional security models are designed to protect the perimeter, threats that get inside the network are left invisible, uninspected, and free to morph and move wherever they choose in order to extract sensitive, valuable business data. Knowing who is using the applications on these MilDep networks, and who may have transmitted a threat or is transferring files, strengthens security policies and reduces incident response times. This document focuses on capability gaps in visibility into how a Navy user retrieves their data.

For example, visibility into the application activity at a user level, not just an IP address level, allows the Navy user to more effectively enable the applications traversing the network. They can align application usage with warfighting requirements and, if appropriate, inform users that they are in violation of policy, or even block their application usage outright.

How this document defines movement or access is based on who the user is and the defined appropriate interaction. This document attempts to integrate current identity and access management (IAM), risk management, and cyber and authentication frameworks, enabling precise access control through policy-based multi-factor authentication. This disrupts the use of stolen X.509-based credentials. DoD encrypted traffic is on an explosive upturn, and cyber adversaries are using encryption to hide from security surveillance and bypass security controls. This means that even the Navy, with mature security measures in place, can be breached if it is not securing encrypted traffic.

Hence, how does the Navy gain visibility and context for all of its traffic across user, device, location, and application? We might call this cross-domain authentication using conventional X.509 certificates and keys, plus zoning capabilities that provide visibility into internal traffic, enable micro-segmentation of perimeters, and act as border control within the Navy.

NSA FIPS 140-2 security is limited to keys and key operations. Our proposed Zero Trust platform recommends integrated HSM, KMS, encryption, and tokenization functionality, with support for the full NSA Suite B algorithms and CSfC mobility: RSA, AES, and Elliptic Curve. In our code structure, the elliptic curve is a set of pairs of field elements: each pair is a “point”, the field elements contained in a point are “coordinates”, and the coordinates of each point must satisfy an equation that defines the curve. The on_curve(P) function returns true if a purported point P satisfies the curve equation.

The elliptic curve also defines an addition operation between points, and an operation for negating points. Together with the identity point, these operations define a group structure on the curve’s points. Adding a point P to itself k times (P + P + … + P) is scalar multiplication by the scalar k, represented as kP. The default hash function is SHA-512. We define hash as a function that applies the cryptographic hash to an input byte sequence and returns an integer: the output of the cryptographic hash parsed in little-endian form. An example is the Curve25519 elliptic curve.

Use case, pre-hashing: except for XEdDSA verification, the signing and verification algorithms hash the input message twice. For large messages this could be expensive, and would require either large buffers or more complicated APIs.

To prevent this, APIs may wish to specify a maximum message size that all implementations must be capable of buffering. Protocol designers can specify “pre-hashing” of message fields to fit within this limit. Designers are encouraged to use pre-hashing selectively, so as to limit the potential impact from collision attacks (e.g., pre-hashing the attachments to a message but not the message header or body). In theory, under some circumstances it is safe to use a key pair to produce signatures and also to use the same key pair within certain Diffie-Hellman based protocols. More discussion and a possible pilot will refine this implementation for key operations in the future.
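The curve and hash conventions defined earlier (the on_curve check and the little-endian SHA-512 parse) can be sketched for Curve25519 in its Montgomery form, y^2 = x^3 + 486662x^2 + x (mod 2^255 - 19). The curve parameters are real; the square-root computation used to find a y for the base point is a standard method for primes p with p % 8 == 5.

```python
import hashlib

P = 2**255 - 19
A = 486662

def on_curve(point) -> bool:
    """Return True if the purported point (x, y) satisfies the curve equation."""
    x, y = point
    return (y * y - (x * x * x + A * x * x + x)) % P == 0

def hash_to_int(data: bytes) -> int:
    """Apply the default hash (SHA-512) and parse the output little-endian."""
    return int.from_bytes(hashlib.sha512(data).digest(), "little")

# The base point of Curve25519 has x = 9; recover a matching y by taking a
# square root mod P. Since P % 8 == 5, a candidate root is v^((P+3)/8),
# corrected by sqrt(-1) = 2^((P-1)/4) when the candidate squares to -v.
x = 9
y2 = (x**3 + A * x**2 + x) % P
y = pow(y2, (P + 3) // 8, P)
if (y * y) % P != y2:
    y = (y * pow(2, (P - 1) // 4, P)) % P

assert on_curve((x, y))
```

A point with a perturbed coordinate fails on_curve, which is exactly the validation a receiver should perform before using an untrusted point.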

This component performs broad cryptographic and key management operations, including key generation, key import, key rotation, key derivation, encryption, decryption, signing, verification, tokenization, and masking: a unified data protection process within Zero Trust.

To gain traffic visibility and context, traffic needs to go through a next-generation firewall with decryption capabilities. The next-generation firewall enables micro-segmentation of perimeters and acts as border control within your organization. While it is necessary to secure the external perimeter border, it is even more crucial to gain the visibility to verify traffic as it crosses between the different functions within the network. Adding two-factor authentication and other verification methods will increase your ability to verify users correctly. Leverage a Zero Trust approach to identify Navy business processes, Navy users, Navy big data, Navy data flows, and associated risks, and set policy rules that can be updated automatically, based on those risks, with every iteration.

For example, virtualization technology and a framework that allows for the creation, configuration, and execution of the various functional areas and virtual machines of a Navy system can be adapted to a Zero Trust architecture for mobile devices. This technology validates and verifies which configurations and systems are to be booted and loaded onto a device. Our approach leverages Type 1 virtualization to split the mobile device functions into multiple virtual machines (VMs), allowing greater operational integrity, more granular system control, and a reduced attack surface. Various rules of operation govern the interactions across functional areas and between virtual machines; these rules ensure that the system functions in very specific ways. Boot integrity validation, an isolated DIT (VPN), isolated and controlled storage (eMMC), an isolated userland (Android OS and Apple iOS), and isolated cryptographic functions (entropy and key store) are key to validation and verification for endpoint security using Zero Trust. Cryptographic keys for encrypted areas of the system are controlled in a dedicated VM; having these keys isolated from the rest of the system is critical for system-wide assurance. The entropy VM is responsible for pseudorandom number generation for the system’s various cryptographic functions, and the reliable generation of cryptographically sound random numbers is vital to system security and integrity. The eMMC VM, for example, controls the data streams that flow into or out of the device’s eMMC (physical storage); isolating storage functionality allows highly managed control of data being stored on the device. Mobility is essential to the Navy both on the enterprise network and on tactical networks using CSfC capability packages, and Zero Trust is a prime candidate for CSfC.

Let’s discuss how we intend to protect these certificates from compromise by integrating our Zero Trust model. X.509 public key certificates have become an accepted method for securely binding the identity of an individual or device to a public key, in order to support public key cryptographic operations such as digital signature verification and public key-based encryption. However, prior to using the public key contained in a certificate, an application first has to determine the authenticity of that certificate, and specifically the validity of all the certificates leading to a trusted public key, called a trust anchor. By validating this certification path, the assertion of the binding made between the identity and the public key in each of the certificates can be traced back to a single trust anchor. The process by which an application determines the authenticity of a certificate is called certification path processing. Certification path processing establishes a chain of trust between a trust anchor and a certificate; this chain of trust is composed of a series of certificates known as a certification path. A certification path begins with a certificate whose signature can be verified using a trust anchor and ends with the target certificate. Path processing entails building and validating the certification path to determine whether a target certificate is appropriate for use in a particular application context.

Additionally, the need to develop complex certification paths is increasing. Many PKIs now use complex structures rather than simple hierarchies, and some enterprises are gradually moving away from trust lists filled with many trust anchors toward an infrastructure with one trust anchor and many cross-certified relationships. When verifying X.509 public key certificates, the application performing the verification often has no knowledge of the underlying PKI that issued the certificate. PKI structures can range from very simple hierarchies to complex structures such as mesh architectures involving multiple bridges. The path-building algorithm then ideally becomes a tree-traversal algorithm, with weights or priorities assigned to each branch point to guide the decision-making. If properly designed, such an approach would effectively yield the “best path first” more often than not. Given the simplifying idea of treating path building as a tree traversal, path building can be structured as a depth-first search. The goal of an efficient path-building component is to select the best path first by testing properties of the certificates as the tree is traversed.
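Treating path building as a weighted depth-first search might look like the following sketch. The CA names, graph, and priorities are invented for illustration; a real path builder would derive priorities by testing certificate properties (key usage, name constraints, validity) at each branch point.

```python
TRUST_ANCHOR = "RootCA"

# issuer -> list of (subject, priority); higher priority = more promising branch
cert_graph = {
    "RootCA":         [("BridgeCA", 1), ("IntermediateCA", 5)],
    "BridgeCA":       [("PartnerCA", 3)],
    "IntermediateCA": [("IssuingCA", 5)],
    "IssuingCA":      [("server.navy.example", 4)],
    "PartnerCA":      [("server.navy.example", 2)],
}

def build_path(issuer, target, path=None):
    """Depth-first search from the trust anchor, best-weighted branch first."""
    path = (path or []) + [issuer]
    if issuer == target:
        return path
    # Sort branch points by priority so the "best path first" is tried first
    for subject, _prio in sorted(cert_graph.get(issuer, ()), key=lambda c: -c[1]):
        if subject in path:        # avoid loops in cross-certified meshes
            continue
        found = build_path(subject, target, path)
        if found:
            return found
    return None

path = build_path(TRUST_ANCHOR, "server.navy.example")
# The high-priority IntermediateCA branch is explored before BridgeCA
assert path == ["RootCA", "IntermediateCA", "IssuingCA", "server.navy.example"]
```

The loop check matters in mesh and bridge architectures, where cross-certification can otherwise send a naive builder in circles.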

The greatest security risks associated with this document revolve around performing certification path validation while certification paths are built. In addition, as with any application that consumes data from potentially untrusted network locations, certification path-building components should be carefully implemented so as to reduce or eliminate the possibility of network-based exploits. For example, a poorly implemented path-building module may not check the length of the CRLDP (Certificate Revocation List Distribution Point) URI (Uniform Resource Identifier) before using the C language strcpy() function to place the address in a 1024-byte buffer.

Consider an access policy with all the associated elements needed to retrieve CRLs using CRLDP. Note that you must add either the Client Cert Inspection agent or the On-Demand Cert Auth agent before the CRLDP object in your access policy. One of those agents is required in order to receive the X.509 certificate from the user. This is also important because both agents store the user information, as well as the issuer certificates, in session variables, which allows the CRLDP Auth agent to check the revocation status of the user’s certificate.

A hacker could use such a flaw to create a buffer overflow exploit by encoding malicious assembly code into the CRLDP of a certificate and then using the certificate to attempt an authentication. More generally, the URI identifier type specifies the identifier associated with the certificate’s intended usage with a given Internet security protocol. For example, an SSL/TLS server certificate would contain the server’s DNS name (traditionally also specified as the CommonName, or CN); an S/MIME certificate would contain the subject’s email address; an IPSec certificate would contain a DNS name or IP address; and a SIP certificate would contain a SIP URI. A modicum of common sense is assumed when deciding upon an appropriate URI field value.

Presenting spurious CA certificates containing very large public keys can also create a denial-of-service (DoS) attack. When the system attempts to use the large public key to verify the digital signature on additional certificates, a long processing delay may occur. This can be mitigated by either of two strategies. The first strategy is to perform signature verifications only after a complete path is built, starting from the trust anchor; this eliminates the spurious CA certificate from consideration before the large public key is used. The second strategy is to recognize and simply reject keys longer than a certain size.
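Both mitigations (bounding input lengths before any fixed-size buffer is used, and rejecting oversized keys before verification) can be sketched as simple guard functions. The limits shown are illustrative, not prescribed values.

```python
MAX_URI_LEN = 1023          # leave room for a terminator in a 1024-byte buffer
MAX_KEY_BITS = 8192         # reject absurdly large keys before verification

def safe_crldp_uri(uri: str) -> str:
    """Validate the CRLDP URI length before any fixed-size buffer is used."""
    if len(uri.encode()) > MAX_URI_LEN:
        raise ValueError("CRLDP URI exceeds buffer capacity; rejecting")
    return uri

def accept_ca_key(key_bits: int) -> bool:
    """Second strategy from the text: reject keys longer than a set size."""
    return key_bits <= MAX_KEY_BITS

assert accept_ca_key(4096)
assert not accept_ca_key(65536)
```

Checking the length before the copy, rather than after, is the whole fix for the strcpy()-style overflow described earlier; the key-size guard similarly runs before any expensive signature operation.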

For example, what if the choice between CA certificates is not binary as it was in the previous example? What if the path-building software encounters a branch point with some arbitrary number of CA certificates thereby creating the same arbitrary number of tree branches? This document proposes a way to solve some of these issues.

Traditionally, DoD asynchronous messaging systems such as email have relied on protocols like S/MIME for cryptographic security. These protocols work the way most people are familiar with: one who wishes to receive encrypted email advertises a public key, and those wishing to send encrypted email to that person encrypt their outgoing message with that public key. If an attacker were to record all of a target’s ciphertext traffic over some extended period of time, and then compromise that one key at any point in the future (perhaps by seizing the device it’s on), they would have the ability to decrypt all of the previously recorded ciphertext traffic belonging to the target. That is a fundamental problem.

These types of asynchronous transports pose a fundamental problem for forward secrecy protocols: in order to send a message, the app first needs to complete a key exchange. However, completing a key exchange requires a full round trip of sending a key exchange message and waiting for a response, in a world where there is no guarantee of a rapid reply. Our Zero Trust model provides forward secrecy in a fully asynchronous environment by using ephemeral key exchanges for each session. With ephemeral key exchange there is no long-term key to compromise in the future (the keys live only briefly in memory), so any recorded ciphertext should remain private. For example, our libraries use a collection of algorithms to provide a simplified interface for protecting a message with “public-key authenticated encryption” against eavesdropping, spoofing, and tampering. By default, and as implemented in Zero Trust advanced authentication, they use the following algorithms:

  • Key derivation: Elliptic Curve Diffie-Hellman (ECDH) over the curve Curve25519
  • Symmetric encryption: XSalsa20
  • Authentication and integrity protection: Poly1305-AES

The ZT model uses random number generation for the following purposes, listed in descending order of the randomness quality required:

  • Private key generation
  • Symmetric encryption of media files
  • Nonces
  • Backup encryption salt
  • Padding amount determination

Nonces and salts must never repeat, but they are not required to be hard to guess. Due to the requirement for very high quality randomness when generating the long-term private key, the user is prompted to generate additional entropy by moving a finger on the screen. The movements (consisting of coordinate values and high-resolution timestamps) are continuously collected and hashed for several seconds. The resulting entropy is then mixed (XOR) with entropy obtained from the system’s RNG.
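The entropy-collection and mixing step can be sketched as follows. The touch samples are simulated here; on a device they would be coordinate values and high-resolution timestamps collected from the touchscreen for several seconds.

```python
import hashlib, secrets

def collect_touch_entropy(samples) -> bytes:
    """Continuously hash collected (x, y, timestamp) samples into a digest."""
    h = hashlib.sha256()
    for x, y, t in samples:
        h.update(f"{x},{y},{t}".encode())
    return h.digest()

def mix_entropy(user_entropy: bytes) -> bytes:
    """XOR user-derived entropy with entropy from the system RNG."""
    system_entropy = secrets.token_bytes(len(user_entropy))
    return bytes(a ^ b for a, b in zip(user_entropy, system_entropy))

# Simulated finger movements: (x, y, high-resolution timestamp)
simulated = [(12, 87, 1.0001), (13, 86, 1.0042), (15, 83, 1.0097)]
seed = mix_entropy(collect_touch_entropy(simulated))
assert len(seed) == 32  # suitable as a 256-bit private-key seed
```

Because the two sources are combined with XOR, the seed is at least as unpredictable as the stronger of the two, so a weakness in either the touch data or the system RNG alone does not compromise the key.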

HMAC-SHA256 is used instead of plain SHA-256 not for obfuscation, but as a best practice to ensure that hashes generated by our ZT system are unique and do not match those of any other application in the ecosystem. The keys obviously need to be the same for all users: random salting (such as is used when hashing passwords for storage) cannot be applied here, because the hashes of all users must agree so that matching contacts can be found.
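A minimal sketch of this domain-separated contact hashing, with a hypothetical application-wide key (the same for every user by design, since it only namespaces the hashes rather than keeping them secret):

```python
import hashlib, hmac

APP_WIDE_KEY = b"zt-contact-discovery-v1"  # hypothetical constant, not a secret

def contact_hash(identifier: str) -> str:
    """HMAC-SHA256 so our hashes never collide with another app's plain SHA-256."""
    return hmac.new(APP_WIDE_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Two users hashing the same contact get the same value, enabling matching,
# yet the value differs from an unkeyed SHA-256 of the identifier.
assert contact_hash("+15551234567") == contact_hash("+15551234567")
assert contact_hash("+15551234567") != hashlib.sha256(b"+15551234567").hexdigest()
```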

DoD PKE has been tailored to enable secrecy, obfuscation, and identity verification, but it requires that a large amount of trust be vested in one or more trust anchors, from the DISA CA and the DISA PKE office to internal certificate management systems and the Certificate Revocation Lists themselves, all governed by DISA. Zero Trust can be used to secure a PKI infrastructure and/or enhance CRLs by automating certificate revocation. The state of cyber attacks drives the DoD to take an “assume breach” mindset, but this approach should not be limiting. Threat protection is essential to protect sensitive DoD credentials. Gating access to resources using dynamic trust decisions allows a Navy enterprise to enable access to certain assets from any device while restricting access to high-value assets to enterprise-managed and compliant devices, based on NSA and NIST guidelines. In targeted and data-breach attacks, attackers can compromise a single device within a Navy enterprise and then “hop” laterally across the network using stolen credentials. A Zero Trust network, configured with the right policies around user and device trust, can help prevent stolen network credentials (X.509 certificates) from being used to gain access to the network. It combines built-in behavioral sensors, machine learning, and security analytics to continuously monitor the state of devices and take remedial actions if necessary.

In this document, we propose a cloud-architected Zero Trust Privilege, which is designed to handle requesters that are human, but also machines, services, and APIs. There will still be shared accounts, but for increased assurance best practices now recommend individual identities, not shared accounts, so that least privilege can be applied. All controls must be dynamic and risk-aware, which requires modern machine learning and user behavior analytics. A Zero Trust Privilege approach can help a Navy enterprise grant least privilege access based on verifying who is requesting access, the context of the request, and the risk of the access environment. By implementing least privilege access, Zero Trust Privilege minimizes the attack surface, improves audit and compliance visibility, and reduces risk, complexity, and costs for the modern, hybrid enterprise. Properly verifying WHO means leveraging enterprise directory identities, eliminating local accounts, and decreasing the overall number of accounts and passwords, reducing the attack surface.

A potential use case for Zero Trust is the warfighter’s requirement for essential cross-domain functionality. This technology has to execute programmable rule sets that filter information (messages), allowing individual messages or data fields within them to be selectively passed, blocked, or changed. This method ensures data security on both networks and removes the need for time-consuming “man in the middle” screening of message exchanges. Specifically, this tactical requirement enables information sharing across different security domains in tactical vehicles, aircraft, and dismounted navy/marine/airman/coast guard/soldier systems. The requirement provides a low-cost, small Size, Weight, and Power (SWaP), rugged, tamper-resistant cross-domain solution that is ideal for almost any vehicle, mobile shelter, ground sensor system, aircraft, or UAV.
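The programmable rule sets described above could be sketched as a per-field pass/block/transform table. The field names, the redaction transform, and the default-deny behavior below are illustrative assumptions, not the fielded rule language:

```python
# Each rule names a message field and an action: "pass", "block", or a
# callable that rewrites the value before it crosses the domain boundary.
REDACT = lambda value: "REDACTED"

RULES = {
    "position": "pass",
    "callsign": "pass",
    "crypto_key_id": "block",
    "operator_name": REDACT,
}

def filter_message(message: dict) -> dict:
    """Apply the rule set; fields without an explicit rule are dropped (default deny)."""
    out = {}
    for field, value in message.items():
        action = RULES.get(field, "block")
        if action == "pass":
            out[field] = value
        elif callable(action):
            out[field] = action(value)
        # "block" and unknown fields are omitted entirely
    return out
```

Because the rules are data rather than code, a guard device can load a different table per domain pair without redeploying software, which is what makes automated (rather than human-in-the-loop) screening practical.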

Based on various Navy Cloud requirements, one could incorporate conditional access policies, evaluated in real time and enforced when a Navy user attempts to access any Azure AD-connected application, for example, SaaS apps, custom apps running in the cloud, or on-premises web apps. When suspicious activity is discovered, Azure AD helps take remediation actions, such as blocking high-risk users, resetting user passwords if X.509 credentials are compromised, and enforcing DoD policies for Terms of Use and other RMF and DoD Cyber policies. Conditional access policies can be configured using a device management framework that works by using the protocols or APIs that are available in the mobile operating systems (iOS and Android) in two ways:

  • App-based: Only managed applications can access corporate resources
  • Device-based: Only managed and compliant devices can access corporate resources
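One way to sketch the two modes as policy data; the policy schema, application names, and evaluation logic below are hypothetical, not the actual management framework’s format:

```python
# Hypothetical conditional access policies, expressed as data; a management
# framework would evaluate them on each access attempt.
POLICIES = [
    {"apps": ["SharePoint", "Exchange Online"],
     "require": "device-based",          # only managed *and* compliant devices
     "on_violation": "block"},
    {"apps": ["Outlook Mobile"],
     "require": "app-based",             # a managed application is sufficient
     "on_violation": "block"},
]

def evaluate(app, device_managed, device_compliant, app_managed):
    """Return 'allow' or 'block' for one access attempt."""
    for policy in POLICIES:
        if app not in policy["apps"]:
            continue
        if policy["require"] == "device-based":
            ok = device_managed and device_compliant
        else:                            # app-based
            ok = app_managed
        return "allow" if ok else policy["on_violation"]
    return "block"                       # default deny for unmatched apps
```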

The Zero Trust concepts are a fundamental shift toward a data-centric security architecture. It is assumed that adversaries can breach the network perimeter, which means that the primary focus should be on security close to the most important data. To enable this capability, the US Navy plans on deploying database security gateways to ensure that even if an adversary is able to access the network, the databases are still protected. Protecting DoD credentials from various appliances, end user devices, and service accounts in the perimeter networks advancing to the Cloud is critical to the DoD Cyber and Risk Management guidelines and policies. Zero Trust controls need to be adaptive to the risk context: even if a user has entered the right credentials, if the request comes in from a potentially risky location, then stronger verification is needed to permit access. Modern machine learning algorithms are now used to carefully analyze a privileged user’s behavior, identify “anomalous” or “non-normal” (and therefore risky) activities, and alert or notify security personnel.
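The risk-adaptive check described above can be sketched as a small step-up decision function; the risk categories and outcomes are illustrative assumptions:

```python
def access_decision(credentials_ok: bool, location_risk: str, mfa_done: bool) -> str:
    """Return 'allow', 'step-up', or 'deny' for one request, based on risk context."""
    if not credentials_ok:
        return "deny"
    if location_risk == "high" and not mfa_done:
        # Right credentials, risky location: demand stronger verification
        # before permitting access.
        return "step-up"
    return "allow"
```

The point is that valid credentials alone never short-circuit the decision; the context of the request is weighed on every attempt.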

Most Zero Trust models were not developed with the US MilDeps in mind. As a matter of DoD policy, authentication is a very hard problem for the variety of devices on MilDep enterprise networks. When using a zero-trust mindset, there are multiple ways to set up authentication to build security into zone-based sessions: this can be device-based, user-based, or a combination of the two. Zero Trust architectures demand a genuine paradigm shift, especially for the DoD: a shift from simply identifying known malicious signatures/activities to a model of behavioral threat analytics, constantly examining logs for anomalous activity that may not immediately rise to the level of malicious, but must be examined. The Navy, for example, has invested in a robust Security Information and Event Management (SIEM) capability, which will allow management and auditing of these logs in order to identify anomalous events.

For example, per NAVSEA requirements, Windows Server 2016 and Windows 10 both contain features that Microsoft has developed as part of their approach to providing zero trust capabilities, which focus on credential hardening. PMW commands plan to leverage Windows Server 2016’s Privileged Access Management (PAM) capabilities to better restrict access to elevated privileges. Windows 10 additionally has many capabilities that enterprises can configure to harden credentials. These new capabilities leverage the already extensive capabilities of Active Directory, though it is not yet known whether they can provide as mature an identity management system as other purpose-built systems being explored by the Navy, such as Okta, Radiant Logic, and CyberArk, to mention a few industry partners aligned to Zero Trust. Additionally, both systems can consume either Azure Active Directory or an on-premises Active Directory. The Navy is actively using both deployment options, with on-premises Active Directory handling most of NMCI’s directory services, and Azure Active Directory supporting the Office 365 pilot aligned for a Zero Trust implementation in FY 2019. OPNAV supports new endeavors with new PKI alternatives and Zero Trust based platforms. Orchestration and automation are key enablers: modern configuration management systems can both maintain a device inventory and automate the data plane configuration.

Adopting a risk-based approach to cyber security, leveraging the Risk Management Framework (RMF), is another key technology enabler of zero trust. Various PMW commands in the Navy have been pursuing this strategy in their Cyber Security Roadmap.

As zero trust relies on knowledge not just of the user credentials, but of the endpoint the user is using to access a system, a robust Endpoint Detection and Response (EDR) system is a crucial technology enabler of a zero trust model. To this end, NAVSEA has invested in Tanium EDR to replace HBSS and provide a more mature EDR system, which will allow visibility into the endpoint status when making authentication decisions.

While the zero trust concept places less emphasis on the network perimeter, Network Access Control is still a key functionality that is not abandoned. While some organizations may place all critical assets on networks which are fully reachable by the public internet, and simply provide a robust authentication mechanism before the resource is accessed, as a military organization, the Navy has a different need than a commercial company. The Navy has invested in an Enhanced Network Access Control (eNAC) system to ensure that all devices connected to the network are trusted. While NAC can still be employed in a zero trust network, it does not fulfill the zero trust device authentication requirements due to its distance from the remote endpoint. This capability will need to be extended, as a key goal of zero trust is the ability for users to be able to authenticate from anywhere without the use of a traditional VPN. This goal may or may not serve the Navy’s unique needs as a military organization, though it should be considered.

Additionally, there are several key enablers that are not actively being supported which will need to be in place before a truly mature zero trust capability can be claimed, including Single Sign On (SSO), Cloud Access Security Brokers, and Cloud Data Loss Prevention.

The perimeter has melted and zones are no longer properly trusted, so it’s important to have all the sessions properly authenticated. This can be done using X.509 certificates and a user account that uses two-factor authentication. Using a combination of these methods can create stronger authentication variables and enable finer access to resources. After being properly authenticated into a zero-trust network, these authentication variables can also be used as decision points to gain access to resources.

When implementing a zero-trust network, there needs to be an understanding of how authorization should be handled. Authorization in zero trust architecture is indispensable when determining what resources and data will be allowed on devices.

Zero trust networking depends on the principle of least privilege, as it understands that people and devices are authenticating from different locations and applications. A policy must be created to allow this to occur; a single form of authentication is no longer sufficient to perform authorization under the zero-trust mantra.

We need to take into account what can be used to identify and authorize an identity in a zero-trust network. This means creating a policy based on a combination of system and user accounts; doing so results in a unique authorization decision that uses the variables of this request. The policy might also include anything about the authorization request that a policy is expecting to fulfill granular access, such as the destination, IP address, hardware information, risk and trust scores, and authentication methods. In a zero trust network, users should always be given the least level of privilege necessary until there is a valid need to escalate their access.
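A toy sketch of such a combined authorization decision; the variables, weights, and thresholds below are illustrative assumptions, not a fielded scoring model:

```python
def trust_score(request: dict) -> int:
    """Combine request variables into a 0-100 trust score (weights illustrative)."""
    score = 0
    if request.get("managed_device"):
        score += 40
    if request.get("known_ip"):
        score += 20
    if request.get("mfa"):
        score += 30
    if request.get("compliant_os"):
        score += 10
    return score

def privilege_level(request: dict) -> str:
    """Map the score to the least privilege consistent with the evidence."""
    s = trust_score(request)
    if s >= 90:
        return "standard"      # still not admin: escalation needs a valid request
    if s >= 60:
        return "restricted"
    return "none"
```

Each request yields a unique decision because the score is recomputed from that request’s own variables; no level of privilege is carried over from a previous session.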

Zero Trust does not require trust authorities and facilitates automated verification. The signatures are devoid of any secret data and can be used to mathematically verify the integrity of the data, providing non-repudiation, while also protecting against backdating.

PKI vendors are developing CA suites to address the scalability and portability challenges associated with automated certificate management for large-scale (such as IoT and M2M) identity and access management (IAM). This new Zero Trust IaaS/PaaS platform can assist these PKI platform vendors to ensure the coherence and real-time resilience of their platforms, as well as strongly backstop the authenticity of identities on their ever-growing networks in a cost-effective, scalable, and compliant manner.

This PKI has many issues but is nearly impossible to replace. Leveraging recent progress in verifiable computation, we propose a novel use of existing X.509 certificates and infrastructure. Instead of receiving & validating chains of certificates, Navy applications receive & verify proofs of their knowledge, their validity, and their compliance with application policies. This yields smaller messages (by omitting certificates), stronger privacy (by hiding certificate contents), and stronger integrity (by embedding additional checks, e.g. for revocation).

X.509 certificate validation is famously complex and error prone, as it involves parsing ASN.1 data structures and interpreting them against diverse application policies. To manage this diversity, we propose a new format for writing application policies by composing X.509 templates, and our approach, using Zero Trust, provides a template compiler that generates C code for validating certificates within a given policy. We then use the Geppetto cryptographic compiler to produce a zero-knowledge verifiable computation scheme for that policy. To optimize the resulting scheme, our platform develops new C libraries for RSA-PKCS#1 signatures and ASN.1 parsing, carefully tailored for cryptographic verifiability. From a privacy standpoint, because modern verifiable computation protocols support zero knowledge properties for the prover’s inputs, our Zero Trust validation process enables the selective disclosure of information embedded in standard X.509 certificates. Instead of revealing these certificates in the clear, the prover can convey only the attributes needed for a particular application. For example, the outsourced computation can validate the prover’s certificate chain, and then check that the issuer is on an approved list, that the prover’s age is above some threshold, or that he is not on a blacklist.

An essential limitation of systems based on succinct proofs of outsourced computation is their reliance on a trusted party to generate the cryptographic keys. An honest key generator uses a randomly selected value to create the scheme’s keys and then deletes the random value. A rogue validation key provider, however, could save the random value and use it as a backdoor to forge proofs for the associated policy. Dangerously, and in contrast to the use of rogue certificates, such proofs would be indistinguishable from honest proofs and thus undetectable.

Besides careful key management and local checks to reduce the scope of policies, a partial remedy is to generate keys using a multi-party protocol, which requires only that one of the parties involved is honest, yielding a weaker trust model.

Our Zero Trust model must meet the following criteria:

  • Completeness: If the prover knows the right password, the verifier will be convinced that it is actually the right password.
  • Soundness: If the prover does not know the right password, the verifier will be convinced only with negligible probability.
  • Zero-Knowledgeness: The verifier must not learn the password.
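These three criteria are exactly those satisfied by classical sigma protocols. A toy Schnorr-style identification protocol, sketched below over a deliberately tiny (insecure) group, illustrates them: the prover demonstrates knowledge of the secret x behind y = g^x without revealing x. The parameters are illustrative; real deployments use groups of cryptographic size.

```python
import secrets

# Toy group parameters (insecure, for illustration only).
p = 2039          # prime, p = 2q + 1
q = 1019          # prime order of the subgroup
g = 4             # generator of the order-q subgroup

def keygen():
    x = secrets.randbelow(q - 1) + 1      # the secret (the "password")
    return x, pow(g, x, p)                # (private, public)

def prove_commit():
    r = secrets.randbelow(q - 1) + 1
    return r, pow(g, r, p)                # (nonce, commitment)

def prove_respond(x, r, challenge):
    return (r + challenge * x) % q

def verify(y, commitment, challenge, response):
    # Completeness: g^s == t * y^c (mod p) whenever the prover knows x,
    # since g^(r + c*x) = g^r * (g^x)^c.
    return pow(g, response, p) == (commitment * pow(y, challenge, p)) % p

x, y = keygen()
r, t = prove_commit()
c = secrets.randbelow(q)                  # verifier's random challenge
s = prove_respond(x, r, c)
assert verify(y, t, c, s)                 # an honest prover always convinces
```

Zero-knowledgeness holds because (t, c, s) can be simulated without x; soundness holds because answering two distinct challenges for the same commitment would reveal x.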

Our zero-knowledge system operates on the concept that the system has no knowledge about the content of data provided by users. Therefore, in an implementation of zero-knowledge encryption, a private key, known only to the user, is used to encrypt a given set of data before it is copied to the server, which then manages the encrypted files. In conjunction with other security measures, the use of a key restricts the ability to decrypt the data to the user who originally stored it. This document analyzes the comparative advantages of zero knowledge and traditional security schemes in cloud data storage.

Zero knowledge encryption is the joined application of the security concepts of zero knowledge proofs and encryption. Zero knowledge proofs are a means of proving one’s possession of given knowledge without revealing any information to a verifier that can be used to reconstruct this knowledge. A zero knowledge proof is defined by the principles of completeness, that an honest prover can always convince the verifier of a true statement; soundness, that a dishonest prover can convince the verifier of a false statement only with negligible probability; and perfectness or zero-knowledgeness, that the verifier cannot reconstruct the original knowledge from the revealed information.

Encryption is a means of obfuscating information that, in the public-key setting, relies on hard problems such as the discrete logarithm problem. These are calculations that are easy to compute but nearly impossible to solve in the reverse direction without knowing the secret value, known as a key, used to originally compute them.

Our proposal employs a fast and lightweight zero knowledge proof algorithm, which provides security comparable to that of conventional PKI. It involves client-side hashing of the userid/password to transparently generate a key-pair and register the public key with the remote service provider. The server then generates aperiodic challenges, and a lightweight JavaScript-based client application computes the responses to each server-side challenge. To be clear, zero knowledge proofs aim to prove (probabilistically) a statement without revealing any information; RSA encrypts data with a public key and decrypts it with a private key; a ZKP instead takes a logical proof and shows that its result is correct. A large number of such protocols are known for languages based on discrete logarithm problems, such as Schnorr’s protocol and many of its variants, e.g., for proving that two discrete logs are equal. This last variant is useful, for instance, in threshold RSA protocols, where a set of servers hold shares of a private RSA key, and clients can request them to apply the private key to a given input.
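A minimal sketch of the client-side key derivation and challenge/response flow, shown in Python rather than JavaScript and using a Schnorr-style response over a deliberately tiny (insecure) group; the derivation formula and parameters are illustrative assumptions, not the proposal’s actual scheme:

```python
import hashlib
import secrets

# Small illustrative group parameters (not cryptographic size).
p, q, g = 2039, 1019, 4

def derive_keypair(userid: str, password: str):
    """Client-side: hash the credentials into a private scalar; nothing secret is stored."""
    digest = hashlib.sha256(f"{userid}:{password}".encode()).digest()
    x = int.from_bytes(digest, "big") % q or 1
    return x, pow(g, x, p)            # register only the public part

def respond(x: int, challenge: int):
    """Answer a server challenge with a (commitment, response) pair."""
    r = secrets.randbelow(q - 1) + 1
    t = pow(g, r, p)
    s = (r + challenge * x) % q
    return t, s

def check(y: int, challenge: int, t: int, s: int) -> bool:
    """Server-side verification against the registered public key only."""
    return pow(g, s, p) == (t * pow(y, challenge, p)) % p
```

The server never sees the password or the derived private scalar, only the public key and challenge responses, which is what distinguishes this from conventional password verification.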

This model includes a trusted functionality for setting up a private/public key pair individually for each player (in fact, we only need this for the verifiers). Hence, unlike traditional setups, the key setup is not tied to a particular prover/verifier pair: it can be implemented, for instance, by having the verifier send her public key to a trusted “certification authority” who will sign the key, once the verifier proves knowledge of her private key. Now, any prover who trusts the authority to only certify a key after ensuring that the verifier knows her private key can safely (i.e., in zero-knowledge) give non-interactive proofs to the verifier. Our technique requires homomorphic public-key encryption, and it preserves the communication complexity of the original protocol up to a constant factor.

For example, there are many different RSA encryption mechanisms in the literature. The oldest mechanisms use RSA to directly encrypt a user’s message; this requires careful padding and scrambling of the message. Newer mechanisms generate a secret key (for example, an AES key), use the secret key to encrypt and authenticate the user’s message, and use RSA to encrypt the secret key; this allows simpler padding, since the secret key is already randomized. The newest mechanisms such as Shoup’s “RSA-KEM” simply use RSA to encrypt log n bits of random data, hash the random data to obtain a secret key, and use the secret key to encrypt and authenticate the user’s message; this does not require any padding.
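The RSA-KEM idea can be sketched as follows, with deliberately tiny (insecure) demo primes and without the final authenticated-cipher step; a real deployment would use a full-size modulus and feed the derived key into an authenticated cipher such as AES-GCM:

```python
import hashlib
import secrets

# Toy RSA-KEM sketch. P, Q, and E are demo parameters, far too small for real use.
P, Q = 1009, 1013                     # demo primes
N = P * Q
E = 17                                # public exponent; gcd(E, (P-1)(Q-1)) = 1
D = pow(E, -1, (P - 1) * (Q - 1))     # private exponent

def encapsulate():
    """Sender: RSA-encrypt fresh randomness, hash it to obtain the secret key."""
    m = secrets.randbelow(N - 2) + 2   # random element mod N; no padding needed
    c = pow(m, E, N)                   # RSA applied directly to the randomness
    key = hashlib.sha256(m.to_bytes(4, "big")).digest()
    return c, key

def decapsulate(c):
    """Receiver: recover the randomness with the private key, re-derive the key."""
    m = pow(c, D, N)
    return hashlib.sha256(m.to_bytes(4, "big")).digest()
```

Because only random data is RSA-encrypted, no padding scheme is required; the message itself is protected by the symmetric key derived from the hash.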

Generating large amounts of truly random data is expensive. Fortunately, truly random data can be simulated by pseudorandom data produced by a stream cipher from a much smaller key. (Even better, slight deficiencies in the randomness of the cipher key do not compromise security.)

An immediate consequence of our results is non-interactive threshold RSA and discrete-log based cryptosystems without random oracles, assuming only that each client has a registered key pair. In the context of threshold cryptography, where keys must be set up initially anyway, this does not seem like a demanding assumption. Our protocols are as efficient as the best-known previous solutions (which required random oracles) up to a constant factor. This associated Zero Trust system is fully functional and can always be implemented using a more standard PKI with the DoD CA and generic zero-knowledge techniques. The verifier sends her public key to the DoD CA and proves in zero-knowledge that she knows a set of randomness that, using the given key-generation algorithm, leads to the public key she sent. One can see that all that is needed is knowledge of the challenge value e and of the RSA modulus n, plus assurance that e lies in the proper interval and that n is well formed. (Knowledge of the factorization of n, in particular, is not required.)

The X.509 PKI is slow to change; it is not uncommon for the DISA certification authority to use the same root and intermediate certificates for years, without upgrading the cryptographic primitives used for signing. For instance, the MD5 hashing algorithm remained widely used for several years after Wang et al. demonstrated that it was vulnerable to collision attacks. The ongoing migration from SHA1 to SHA2 has also been delayed by over a year, due to pressure to maintain legacy support. Similarly, a number of certification authorities have allowed the use of short RSA public keys, or keys generated with low entropy. Notably, the Zero Trust validator outsources to the prover all of the checks that the verifier would have done on those certificates, and the verifier’s job is simplified to checking only that the outsourced validation was performed correctly. This may partially mitigate these issues by allowing certificate owners to hide their public keys and certificate hashes, and hence to prevent some offline attacks; arguably, it also makes such issues harder to measure.

Mistakes and inconsistencies in implementations of X.509 have led to dozens of attacks. Famously, the first version of X.509 did not include a clear distinction between the DISA CA and endpoint certificates used by various Navy enterprise servers; e.g., Purebred key server.

Similarly, many implementations of DER are incorrect, leading to universal forgery attacks against PKCS#1 signatures; again, variations of this attack have reappeared every so often. In contrast, our new Zero Trust validation server does not trust X.509 parsers; instead, it verifies the correctness of untrusted parsing by re-serializing and hashing.

X.509 defines the syntax and semantics of public key certificates and their issuance hierarchy. The purpose of a certificate is to bind a public key to the identity of the owner of the matching private key (the subject), and to identify the entity that vouches for this binding (the issuer). Certificates also contain lifetime information, extensions for revocation checking, and extensions to restrict the certificate’s use. The PKI’s main high-level API is certificate-chain validation, which works as follows: given a list of certificates (representing a chain) and a validation context (which includes the current time and information on the intended use), it checks that: 1) the certificates are all syntactically well formed; 2) none of them is expired or revoked; 3) the issuer of each certificate matches the subject of the next in the chain; 4) the signature on the contents of each certificate can be verified using the public key of the next in the chain; 5) the last, root certificate is trusted by the caller; and 6) the chain is valid with respect to some context-dependent application policy (e.g. “valid for signing emails”). If all these checks succeed, chain validation returns a parsed representation of the identity and the associated public key in the first certificate (the endpoint).
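The six checks above can be sketched over simplified, dictionary-based certificates. The schema is an illustrative assumption, and cryptographic signature verification is mocked by a `signed_by` field; a real implementation verifies each signature with the issuer’s public key:

```python
def validate_chain(chain, trusted_roots, now, usage):
    """Simplified chain validation: returns (subject, public_key) of the
    endpoint certificate, or None if any of the six checks fails."""
    for cert in chain:
        if not (cert["not_before"] <= now <= cert["not_after"]):
            return None                      # (2) expired or not yet valid
        if cert.get("revoked"):
            return None                      # (2) revoked
    for child, parent in zip(chain, chain[1:]):
        if child["issuer"] != parent["subject"]:
            return None                      # (3) issuer/subject linkage broken
        if child["signed_by"] != parent["subject"]:
            return None                      # (4) stand-in for signature check
    if chain[-1]["subject"] not in trusted_roots:
        return None                          # (5) root not trusted by caller
    endpoint = chain[0]
    if usage not in endpoint["usages"]:
        return None                          # (6) application policy check
    return endpoint["subject"], endpoint["public_key"]
```

Check (1), syntactic well-formedness, is implicit here in the dictionary schema; with real DER-encoded certificates it is a substantial parsing step of its own.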

As a concrete running example, consider a client who wishes to sign her email using the S/MIME protocol. She holds a certificate issued by a well-known CA for her public key, and she uses her corresponding private key to sign a hash of her message. With the current DoD PKI, she attaches her certificate and signature to the message. The recipient of the message extracts the sender’s email address (from), parses and checks the sender’s certificate, and verifies, in particular, that the sender’s certificate forms a valid chain together with a local, trusted copy of the CA certificate; that its subject matches the sender’s address (from); and that it has not expired. Finally, he verifies the signature on a hash of the message using the public key from the sender’s certificate. These checks may be performed by a C function, declared as void validate(SHA2 hash, char *from, time now, CHAIN certs, SIG sig); and for simplicity, assume that all S/MIME senders and receivers agree on this code for email signatures, with a fixed root CA.

With our new concept of a Zero Trust validation process, we compile validate into cryptographic keys for S/MIME, i.e., an evaluation key and a verification key. Navy Outlook email signature validation then proceeds as follows. The sender signs the hash of her message as usual, using the private X.509 key associated with her certificate. Instead of attaching her certificate and signature, however, she attaches a proof. The proof system is complete, meaning that a legitimate prover can always produce a proof that satisfies Verify; zero-knowledge, meaning that the verifier learns nothing about the prover’s input w; and sound, meaning that a cheating prover will be caught with overwhelming probability. This system offers strong asymptotic and concrete performance: cryptographic work for key and proof generation scales linearly in the size of the computation (measured roughly as the number of multiplication gates in the arithmetic circuit representation of the computation), and verification scales linearly with the verifier’s IO (e.g., |u| + |y|), regardless of the computation, with typical examples requiring approximately 10 ms. The proofs are constant size (288 bytes).

To generate this proof, she calls the Zero Trust validation server with the S/MIME evaluation key, her message hash, email address, time, certificate, and signature. The Zero Trust server runs validate on these arguments and returns a proof that it ran correctly. Instead of calling validate, the recipient calls the Zero Trust validation server with the S/MIME verification key and the received message’s hash, its from field, and its proof. Although the certificate and signature never leave the sender’s machine, the recipient is guaranteed that validate succeeded, and hence that the message was not tampered with.

These certificates are the staple solution today for verifying that a public key belongs to an individual. These technologies are used in conjunction with everything from web-based browser authentication to e-commerce and identity management for access to government and commercial e-services.

The policy may incorporate some application logic or even algorithms not implemented by the certificate verifier. The policy may be deployed, e.g., as a key in the configuration of the client banking application, or by re-using existing public key management mechanisms, such as key pinning. We expect such ad hoc, application-specific deployment of policies to help break the circular dependency of servers only wanting to use a policy once supported by almost all clients. Our concept of Zero Trust validation server policies also enables some emancipation from traditional certificate issuers, notably root CAs, who can currently impersonate any of their customers. As an example, a DoD S/MIME policy may require that a class of official mail be signed by two certificates issued by two independent CAs (e.g., the DoD CA and the Navy VM CA), or that the sender certificate be endorsed by some independent organization yet to be named. Similarly, such policies can be deployed by just pushing a new key to the browser or the client software.

For example, instead of installing a root certificate key to access some exotic service, installing a Zero Trust key for that service is more specific and more versatile, inasmuch as the client, or some trusted third party, can review the precise policy associated with the key.

Empirically, many past vulnerabilities have been due to bugs in X.509 certificate parsing and validation code, such as when the DoD CAC is used to access certain web portals from ill-advised browsers, for example in their handling of ill-formed certificates, or their lax interpretation of certificate-signing flags, and each of those bugs required a full software patch. Our technical answer to this class of problem is to mostly generate the parsing and validation routines from high-level specifications; however, we note that this approach can be applied to the native validation code, and an interesting research direction is to certify the compilation process. In addition, we argue that any (potential) bug in a Zero Trust validation key policy or its implementation would be easier to patch by updating the policy key, regardless of the variety of application implementations. Furthermore, after the Zero Trust validation server’s key generation phase, if the generation is done honestly, there is no longer any secret associated with the Zero Trust keys, so they cannot be dynamically compromised.

The fixed code used by the Zero Trust verifier itself constitutes a smaller trusted computing base, used in a uniform manner, independent of the actual certificate contents. However, it also means that implementation-specific checks that depend on non-public values of the certificate are no longer possible.

The Zero Trust validation server supports the RSA PKCS#1 v1.5 signature verification algorithm on keys of up to 2048 bits, coupled with the SHA1 and SHA256 hash functions. This combination of algorithms is sufficient to validate over 95% of recently issued certificate chains on the Web, according to recent PKI measurement studies. We assume all RSA certificates use the public exponent e = 65537, the only choice in practice. Our Zero Trust Certificate System supports several cipher suites with the RSA key exchange: AES encryption with SHA-256 message authentication.

Advanced Encryption Standard (AES) ciphers have a fixed block size of 128 bits, and the keys can be either 128-bit or 256-bit. … These cipher suites are FIPS-compliant. If you want more security, note that RSA does not scale well: you have to increase the RSA modulus size far faster than the ECDSA curve size. 1024-bit RSA keys are obsolete, and 2048-bit keys are the current standard size. If you need to go further, you’re stuck. First, if the CA does not provide a 4096-bit RSA key chain, signing your own 4096-bit RSA key with a 2048-bit RSA intermediary makes little sense. Second, note that every doubling of an RSA private key degrades TLS handshake performance by approximately 6–7 times. So, if you need more security, choose ECC. Our Zero Trust model employs forward secrecy: if a private key is compromised, past messages that were sent cannot also be decrypted. Thus it is beneficial for your security to have perfect forward secrecy (PFS).

For ECC (elliptic curve cryptography), given two rational points on an elliptic curve, the line through them will almost always intersect the curve at one additional point, which will again have rational coordinates. It’s easy to use two rational points to generate a third, but it’s hard to do the reverse, i.e., to take one rational point and find two rational points that would generate it via the straight-line method. This is what makes elliptic curves so useful for cryptography: operations that are easy to do but hard to undo are fundamental to cryptographic security. This document requires the Navy to harden their web server SSL ciphers, for example. Important to note: when connecting to a mail service such as Gmail, you connect on port 465 if you’re using implicit SSL, or on port 587 if you’re using TLS (STARTTLS), and sign in with a username and password for authentication. Before anyone starts worrying that they need to replace their existing SSL Certificates with TLS Certificates, it’s important to note that certificates are not dependent on protocols. That is, you don’t need to use a TLS Certificate vs. an SSL Certificate. While many vendors tend to use the phrase “SSL/TLS Certificate”, it may be more accurate to call them “Certificates for use with SSL and TLS”, since the protocols are determined by your server configuration, not the certificates themselves.
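The chord construction on rational points can be checked directly with exact rational arithmetic. The curve y² = x³ + 17 and its points (-2, 3) and (2, 5) are a standard small example chosen for illustration, not parameters from this document:

```python
from fractions import Fraction as F

A, B = 0, 17            # the curve y^2 = x^3 + A*x + B, chosen for small rational points

def on_curve(pt):
    x, y = pt
    return y * y == x ** 3 + A * x + B

def third_point(p1, p2):
    """Third intersection of the chord through two rational points with the curve."""
    (x1, y1), (x2, y2) = p1, p2
    m = F(y2 - y1) / F(x2 - x1)    # slope of the chord (requires x1 != x2)
    x3 = m * m - x1 - x2           # from expanding (y1 + m(x - x1))^2 = x^3 + Ax + B
    y3 = y1 + m * (x3 - x1)
    return x3, y3
```

Running `third_point((-2, 3), (2, 5))` yields the rational point (1/4, 33/8), which again lies on the curve; recovering the two generating points from that single point is the hard direction the text refers to.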

This document supports TLS 1.3, a huge step forward for web security and performance. TLS 1.3 embraces the “less is more” philosophy, removing support for older, broken forms of cryptography. That means you can’t turn on the potentially vulnerable stuff, even if you try. The list of TLS 1.2 features that have been removed is extensive, and most of the exiled features have been associated with high-profile attacks. These include:

  • RSA key transport — Doesn’t provide forward secrecy
  • CBC mode ciphers — Responsible for BEAST and Lucky 13
  • RC4 stream cipher — Not secure for use in HTTPS
  • SHA-1 hash function — Deprecated in favor of SHA-2
  • Arbitrary Diffie-Hellman groups — CVE-2016-0701
  • Export ciphers — Responsible for FREAK and LogJam
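As a sketch, Python's standard ssl module can enforce this "less is more" posture by pinning a context to TLS 1.3 only, so none of the removed features above can ever be negotiated. This assumes Python 3.7+ and an OpenSSL build with TLS 1.3 support:

```python
import ssl

# Server-side context restricted to TLS 1.3. With no TLS 1.2 ciphers
# available, RSA key transport, CBC modes, RC4, SHA-1 and export
# ciphers simply cannot be selected by any client.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
ctx.maximum_version = ssl.TLSVersion.TLSv1_3
```

A certificate and key would still be loaded with `ctx.load_cert_chain(...)` before serving traffic.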

The difference between ECDHE/DHE (ephemeral Diffie-Hellman) and ECDH (static elliptic-curve Diffie-Hellman) is that with ECDH a single key is reused for the duration of the SSL session (and can also be used for authentication), while with ECDHE/DHE a distinct key is generated for every exchange. The difference between DHE and ECDH in two bullet points:

  • DHE uses modular arithmetic over a prime field to compute the shared secret.
  • ECDH performs the same exchange using point arithmetic on an elliptic curve (an elliptic curve is a type of algebraic curve).
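The first bullet can be made concrete with a toy finite-field exchange in Python. The prime and generator below are illustrative placeholders, not a vetted Diffie-Hellman group (real deployments use 2048-bit or larger groups):

```python
import secrets

# Toy finite-field Diffie-Hellman, purely to show the modular arithmetic.
p = 2**127 - 1   # a Mersenne prime; NOT a standardized DH group
g = 3            # toy generator

a = secrets.randbelow(p - 2) + 2   # Alice's ephemeral private key
b = secrets.randbelow(p - 2) + 2   # Bob's ephemeral private key
A = pow(g, a, p)                   # Alice's public value g^a mod p
B = pow(g, b, p)                   # Bob's public value g^b mod p

# Each side combines its own private key with the other's public value.
shared_alice = pow(B, a, p)        # (g^b)^a mod p
shared_bob = pow(A, b, p)          # (g^a)^b mod p
```

Because (g^b)^a = (g^a)^b mod p, both sides derive the same secret without ever transmitting it; generating fresh a and b per session is what makes the exchange "ephemeral."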

Image courtesy of Joppe W. Bos

ECDHE suites use elliptic curve Diffie-Hellman key exchange, whereas DHE suites use ordinary finite field Diffie-Hellman. The exchange is signed with RSA in the same way in both cases. The main advantage of ECDHE is that it is significantly faster than DHE.

A key exchange scheme consists of two algorithms:

  • A key generation algorithm, which randomly selects a keypair;
  • A key exchange algorithm, which takes as input your private key and the remote party’s public key, and outputs a shared secret.

A signature scheme is a triple of algorithms:

  • A key generation algorithm, which randomly selects a keypair.
  • A signing algorithm, that takes a message and a private key and outputs a signature.
  • A verification algorithm, which takes a message, a signature and a public key, and outputs a Boolean indicating whether the combination is valid.
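The signature triple above can be sketched with textbook RSA in pure Python. The tiny primes below are deliberately insecure demo values (real RSA uses primes of roughly 1024 bits or more), chosen only to make the (keygen, sign, verify) structure concrete; this requires Python 3.8+ for `pow(e, -1, m)`:

```python
import hashlib

def keygen():
    """Key generation: select a keypair (toy parameters, insecure by design)."""
    p, q, e = 61, 53, 17                   # demo primes and public exponent
    n = p * q                              # modulus, 3233
    d = pow(e, -1, (p - 1) * (q - 1))      # private exponent
    return (n, e), (n, d)                  # (public key, private key)

def sign(message: bytes, priv):
    """Signing: hash the message, then apply the private exponent."""
    n, d = priv
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, sig: int, pub) -> bool:
    """Verification: Boolean check that the signature matches the hash."""
    n, e = pub
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(sig, e, n) == h

pub, priv = keygen()
sig = sign(b"ephemeral public key bytes", priv)
```

Production signatures additionally need proper padding (e.g., RSASSA-PSS), which this sketch omits.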

Image courtesy of Joppe W. Bos

To perform an authenticated ephemeral key exchange, the parties must agree on a key exchange scheme and a signature scheme, and must have each other’s authenticated signature public key. Then:

  1. Both parties generate their own ephemeral key exchange keypair;
  2. Both parties sign their ephemeral key exchange public key;
  3. Both parties send their ephemeral key exchange public key to the other, along with the signature of that key;
  4. Both parties check the signature on the other's ephemeral key exchange public key, and abort if it is invalid;
  5. Both parties now use their own ephemeral key exchange private key and the other's ephemeral key exchange public key to compute the shared secret.

This can also be accomplished by having only one of the parties sign its ephemeral key exchange public key; that is how HTTPS is normally done, for example. The non-signing party is then not authenticated: the signer gets no guarantee that its peer is who it claims to be.
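The five steps above can be sketched end-to-end by combining a toy finite-field DH exchange with textbook-RSA signatures over the ephemeral public values. Every parameter here is a tiny demo value, not a deployable choice, and for brevity both parties use the same demo RSA parameters:

```python
import hashlib
import secrets

P_DH, G = 2**127 - 1, 3                      # toy DH group (illustrative)

def dh_keypair():
    priv = secrets.randbelow(P_DH - 2) + 2
    return priv, pow(G, priv, P_DH)

def rsa_keygen():
    p, q, e = 61, 53, 17                     # toy primes, insecure by design
    n, d = p * q, pow(e, -1, (p - 1) * (q - 1))
    return (n, e), (n, d)

def rsa_sign(data, priv):
    n, d = priv
    return pow(int.from_bytes(hashlib.sha256(data).digest(), "big") % n, d, n)

def rsa_verify(data, sig, pub):
    n, e = pub
    return pow(sig, e, n) == int.from_bytes(hashlib.sha256(data).digest(), "big") % n

# Long-term signature keys, authenticated out of band (or via certificates).
alice_pub, alice_priv = rsa_keygen()
bob_pub, bob_priv = rsa_keygen()

# Steps 1-3: generate ephemeral keypairs, sign and "send" the public halves.
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()
a_sig = rsa_sign(str(a_pub).encode(), alice_priv)
b_sig = rsa_sign(str(b_pub).encode(), bob_priv)

# Step 4: each side checks the other's signature before proceeding.
assert rsa_verify(str(b_pub).encode(), b_sig, bob_pub)
assert rsa_verify(str(a_pub).encode(), a_sig, alice_pub)

# Step 5: both sides compute the same shared secret.
secret_a = pow(b_pub, a_priv, P_DH)
secret_b = pow(a_pub, b_priv, P_DH)
```

Dropping one of the two signing steps yields the one-sided (HTTPS-style) variant, at the cost of leaving that party unauthenticated.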

You can choose any combination of signature and Diffie-Hellman algorithms for this. It does not matter if the signature scheme is RSA and the key exchange scheme is ECDH. In that case, step 1 uses the ECDH key generation algorithm to generate an ECDHE keypair, and step 2 uses the RSA signing algorithm to sign that ECDHE public key. The signature algorithm does not care that the message it is signing is an ECDHE public key; it is just data for one party to sign and the other to verify. ECDHE is not involved in the certificate. The certificate contains a public signature key, metadata describing its owner, and signatures that help the recipient authenticate that the metadata is accurate. The most popular signature algorithm used in certificates is RSA; ECDSA is an alternative. ECDH is not relevant here, because it is not a signature algorithm.

With certificates, the sketch above is modified by adding two steps at the beginning:

  1. Both parties send their certificate to each other.
  2. Both parties use their PKI to verify the other’s certificate, and abort if it’s invalid.

Then the procedure continues, using the certificates' enclosed keys to sign and verify the key exchange. The Navy uses TLS. A TLS session has the following four major steps (somewhat simplified):

  1. Client and server agree on a cipher suite; Outlook Web App negotiates TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (secp256r1), so let's use that as an example.
  2. The client forces the server to prove who it is by performing a public key signature over data provided by the client. In our example, it uses RSA.
  3. Client and server perform a key exchange to establish a session key. In our example, they use ECDHE with the curve secp256r1. The server signs the ECDHE messages with its RSA key to prevent a man-in-the-middle attack.
  4. Once a session key has been established and both sides (and nobody else) have a copy of it, they use this session key with a symmetric cipher for the rest of the session; in our example, AES-128-GCM.
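Step 4 can be illustrated with a simplified key-derivation sketch in Python. The nonces, label, and HMAC-SHA256 extract-then-expand shape here are stand-ins for the real TLS key schedule (TLS 1.3 uses HKDF per RFC 5869), and AES-128-GCM itself is not in the Python standard library:

```python
import hashlib
import hmac

# Placeholder inputs: in a real handshake these come from the ECDHE
# exchange and the client/server hello messages.
shared_secret = (123456789).to_bytes(16, "big")   # stand-in ECDHE output
client_random = b"\x01" * 32                      # stand-in handshake nonce
server_random = b"\x02" * 32                      # stand-in handshake nonce

# Extract: mix the shared secret with both nonces into a pseudorandom key.
prk = hmac.new(client_random + server_random, shared_secret,
               hashlib.sha256).digest()
# Expand: derive 16 bytes of keying material, sized for AES-128.
session_key = hmac.new(prk, b"key expansion\x01", hashlib.sha256).digest()[:16]
```

The resulting 16-byte key is what both sides would feed into the symmetric cipher for the remainder of the session.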

A Navy use case might be aligned to a NetScaler appliance that supports the ECDHE cipher group on the front end and the back end. On an SDX appliance, if an SSL chip is assigned to a VPX instance, the cipher support of an MPX appliance applies; otherwise, the normal cipher support of a VPX instance applies. What are the technical issues? The Navy administrator must explicitly bind ECC curves to existing SSL virtual servers or front-end services; the curves are bound by default to any virtual servers or front-end services created after the upgrade.

Based on policy, this document supports ECDH key exchange, which avoids all known feasible cryptanalytic attacks, and modern web browsers now prefer ECDHE over the original, finite field Diffie-Hellman. The discrete log algorithms used to attack standard Diffie-Hellman groups do not gain as strong an advantage from precomputation, and individual servers do not need to generate unique elliptic curves. Administrators should use 2048-bit or stronger Diffie-Hellman groups with "safe" primes.

Perfect forward secrecy is achieved by using temporary key pairs to secure each session: they are generated as needed, held in RAM during the session, and discarded after use. The "permanent" key pairs (the ones validated by a Certificate Authority) are used for identity verification and for signing the temporary keys as they are exchanged, not for securing the session.

Since this ephemeral key is not a certified public key, no authentication can be performed on it alone; an attacker can substitute their own key. Thus, when using ECDHE/DHE, you should also implement client key validation on your server (2-way SSL) to provide authentication.

ECDHE and DHE provide forward secrecy, while static ECDH does not, and ECDHE is significantly faster than DHE. There are rumors that the NSA can break DHE keys, and on that basis ECDHE is preferred; other sources indicate DHE is more secure. The calculation used for the keys also differs: DHE is prime field Diffie-Hellman, while ECDHE is elliptic curve Diffie-Hellman. ECDHE can be configured; ECDHE ciphers must not support weak curves, e.g., those of fewer than 256 bits.

Cipher suite selection should weigh the capabilities of our Zero Trust server, client, and certificate authority (required compatibility); one would choose a different cipher suite for an externally exposed website (which needs to be compatible with all major clients) than for internal services. Other criteria include:

▪ Encryption/decryption performance

▪ Cryptographic strength; type and length of keys and hashes

▪ Required encryption features; such as prevention of replay attacks, forward secrecy

▪ Complexity of implementation; can developers and testers easily develop servers and clients supporting the cipher suite?

Our Zero Trust model secures and protects cryptographic keys in this manner:

  1. Store cryptographic keys in a secure digital vault – Move keys into a hardware-backed digital vault with multiple layers of security wrapped around it, and enforce multi-factor authentication for all users who have access to the vault.
  2. Introduce role segregation – Control individual access to stored keys, preventing even the most privileged administrators from reaching them unless explicit permissions have been granted.
  3. Enable secure application access – Enable access to stored keys for authorized applications and verify that the applications are legitimate.
  4. Audit and review key access activity – Audit all activity related to key access and implement trigger events to alert the necessary individuals of any key activity.
  5. Enforce workflow approvals – Enforce workflow approvals for anything considered highly sensitive, including access to the keys themselves.
  6. Monitor cryptocurrency administrator activities – Facilitate connections – similar to an automated secure proxy/jump host – to target systems used to perform cryptocurrency administrator activities (e.g., a system hosting keys in hardware, such as an embedded secure element (eSE) or wallet). The DoD probably won't adhere to cryptocurrency, but the current trend is that cryptocurrencies use cryptography for three main purposes: to secure transactions, to control the creation of additional units, and to verify the transfer of assets.
  7. Protect every application – Define policies that limit access only to users and devices that meet your organization's risk tolerance levels. Define, with fine granularity, which users and which devices can access what applications under which circumstances.

For example, a bitcoin address is created from an ECDSA keypair. It is common to use a hashed version of the public key as the shared address, but the original bitcoin implementation also allowed using the unaltered key directly (which is revealed when the coins in the address are spent). The purpose of public key cryptography for our use case is that the owner can prove ownership of the address with a digital signature, which the blockchain requires before accepting spending of the coins in an address. The ECDSA algorithm is not suited for encrypting messages. If an RSA keypair were used, the sender of money could encrypt and convey some personal information to the receiver (e.g., via a public message server), which obviously could be a useful feature. Is there any reason why an RSA keypair should or could not be used for cryptocurrency addresses? We propose using a Schnorr-style signature scheme, such as Ed25519, with size and computational cost comparable to the ECDSA over secp256k1 used by Bitcoin, and then reusing the key pairs for ECDH/ECIES. XEdDSA enables use of a single key pair format for both elliptic curve Diffie-Hellman and signatures; in some situations it enables using the same key pair for both algorithms.
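The "hashed version of the public key as the shared address" idea can be sketched briefly. A real Bitcoin address is Base58Check over RIPEMD-160(SHA-256(pubkey)) with a version byte; the single SHA-256 below is a simplified stand-in, and the public key bytes are a made-up placeholder:

```python
import hashlib

# Placeholder 33-byte compressed public key (0x02 prefix + x-coordinate);
# not a real curve point, purely illustrative.
public_key = bytes.fromhex("02" + "11" * 32)

# Simplified address: hex digest of SHA-256 over the public key. Hashing
# keeps the key itself hidden until the owner spends from the address.
address = hashlib.sha256(public_key).hexdigest()
```

The owner later proves control of the address by signing a transaction with the corresponding private key.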

If quantum computers were built, they would pose concerns for public key cryptography as we know it. Among other cryptographic techniques, they would jeopardize the use of the PKI X.509 certificates (RSA, ECDSA) used today for authentication. Even though post-quantum signatures could work well for some use cases, like software signing, there are concerns about the effect their size and processing cost would have on technologies using X.509 certificates. In this work, we investigated the viability of post-quantum signatures in X.509 certificates and the protocols that use them (e.g., TLS, IKEv2). We will propose to the Navy a pilot to evaluate existing mechanisms built into these protocols that deal with large records, such as record fragmentation, segmentation, caching, and compression. It is not rare for large X.509 certificates and certificate chains to be transferred over UDP as part of IKEv2 peer authentication; fragmentation is already widely used on the Internet today to carry lengthy certificates that do not fit in the path MTU. Thus, it is straightforward to add support for newly proposed post-quantum signature schemes in X.509 when necessary, by defining new algorithm identifiers (corresponding to particular post-quantum signature scheme parameters and structures).

X.509 revocation checking can take one of two forms: certificate revocation lists (CRLs), which must be downloaded out of band, or the Online Certificate Status Protocol (OCSP), which can either be queried by the validator to obtain a signed proof of non-revocation, or have that proof stapled to the certificate to avoid a high-latency query. The DISA CA requires delegation, and many practical Navy applications rely on some form of authentication delegation. In particular, many Navy servers delegate the delivery of their web content to content delivery networks (CDNs). Websites that use HTTPS with a CDN need to give their X.509 credentials to the CDN provider, which can lead to serious attacks when CDNs improperly manage customer credentials. We propose to reflect the authentication delegation of HTTPS content delivery networks as X.509 delegation. Unfortunately, doing this directly is impractical, because it requires an extension of X.509 that the DoD CA is unlikely to implement, as it runs counter to DoD CIO policy. Our Zero Trust validation approach instead allows a content owner (the Navy) to implement secure X.509 delegation to CDNs using short-lived pseudo-certificates, without the DoD CA's cooperation.

Several trust anchors are involved to authorize the binding of public keys with user identities. User identities are unique within each CA domain and third party Validation Authorities (VAs) can provide a validation service on behalf of the CA. Registration Authorities (RAs), Certificate Revocation