CryptoDB
Papers from TCC 2019
2019
TCC
(Pseudo) Random Quantum States with Binary Phase
Abstract
We prove a quantum information-theoretic conjecture due to Ji, Liu and Song (CRYPTO 2018) which suggested that a uniform superposition with random binary phase is statistically indistinguishable from a Haar random state. That is, any polynomial number of copies of the aforementioned state is within exponentially small trace distance from the same number of copies of a Haar random state.

As a consequence, we get a provable elementary construction of pseudorandom quantum states from post-quantum pseudorandom functions. Generating pseudorandom quantum states is desirable for physical applications as well as for computational tasks such as quantum money. We observe that replacing the pseudorandom function with a (2t)-wise independent function (either in our construction or in previous work) results in an explicit construction of quantum state t-designs for all t. In fact, we show that the circuit complexity (in terms of both circuit size and depth) of constructing t-designs is bounded by that of (2t)-wise independent functions. Explicitly, while in prior literature t-designs required linear depth (for $$t > 2$$), this observation shows that polylogarithmic depth suffices for all t.

We note that our constructions yield pseudorandom states and state designs with only real-valued amplitudes, which was not previously known. Furthermore, generating these states requires a quantum circuit of restricted form: one layer of Hadamard gates followed by a sequence of Toffoli gates. This structure may be useful for the efficiency and simplicity of implementation.
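To make the object concrete, here is a minimal numerical sketch (ours, not taken from the paper) of the binary-phase state $$\frac{1}{\sqrt{2^n}}\sum _x (-1)^{f(x)}|x\rangle $$; the paper's pseudorandom construction replaces the truly random f below with a post-quantum PRF, and all names here are illustrative.

```python
# Sketch: a uniform superposition with random binary phase, represented as a
# state vector. Note all amplitudes are real, matching the paper's remark.
import numpy as np

def binary_phase_state(n, rng):
    """Uniform superposition over {0,1}^n with a random +/-1 phase per basis state."""
    f = rng.integers(0, 2, size=2**n)        # truth table of a random Boolean f
    return ((-1.0) ** f) / np.sqrt(2**n)     # amplitude of |x> is (-1)^{f(x)} / sqrt(2^n)

rng = np.random.default_rng(0)
psi = binary_phase_state(4, rng)
print(np.isclose(np.linalg.norm(psi), 1.0))  # True: a valid pure state
print(sorted(set(np.round(psi, 6))))         # only real amplitudes, +/- 1/4
```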
2019
TCC
A Black-Box Construction of Fully-Simulatable, Round-Optimal Oblivious Transfer from Strongly Uniform Key Agreement
Abstract
We show how to construct maliciously secure oblivious transfer (M-OT) from a strengthening of key agreement (KA) which we call strongly uniform KA (SU-KA), where the latter roughly means that the messages sent by one party are computationally close to uniform, even if the other party is malicious. Our transformation is black-box, almost round preserving (adding only a constant overhead of up to two rounds), and achieves standard simulation-based security in the plain model.

As we show, 2-round SU-KA can be realized from cryptographic assumptions such as low-noise LPN, high-noise LWE, Subset Sum, DDH, CDH and RSA—all with polynomial hardness—thus yielding a black-box construction of fully-simulatable, round-optimal M-OT from the same set of assumptions (some of which were not known before).
2019
TCC
A Unified and Composable Take on Ratcheting
Abstract
Ratcheting, an umbrella term for certain techniques for achieving secure messaging with strong guarantees, has spurred much interest in the cryptographic community, with several novel protocols proposed of late. Most of them are composed of several sub-protocols, often sharing similar ideas across different protocols. Thus, one could hope to reuse the sub-protocols to build new protocols achieving different security, efficiency, and usability trade-offs. This is especially desirable in view of the community’s current aim for group messaging, which has a significantly larger design space. However, the underlying ideas are usually not made explicit, but rather implicitly encoded in a (fairly complex) security game, primarily targeted at the overall security proof. This not only hinders modular protocol design, but also makes the suitability of a protocol for a particular application difficult to assess.

In this work we demonstrate that ratcheting components can be modeled in a composable framework, allowing for their reuse in a modular fashion. To this end, we first propose an extension of the Constructive Cryptography framework by so-called global event histories, to allow for a clean modularization even if the component modules are not fully independent but actually subtly intertwined, as in most ratcheting protocols. Second, we model a unified, flexibly instantiable type of strong security statement for secure messaging within that framework. Third, we show that one can phrase strong guarantees for a number of sub-protocols from the existing literature in this model with only minor modifications, slightly stronger assumptions, and reasonably intuitive formalizations.

When expressing existing protocols’ guarantees in a simulation-based framework, one has to address the so-called commitment problem. We do so by reflecting the removal of access to certain oracles under specific conditions, appearing in game-based security definitions, in the real world of our composable statements. We also propose a novel non-committing protocol for settings where the number of messages a party can send before receiving a reply is bounded.
2019
TCC
Adaptively Secure Garbling Schemes for Parallel Computations
Abstract
We construct the first adaptively secure garbling scheme based on standard public-key assumptions for garbling a circuit $$C: \{0, 1\}^n \mapsto \{0, 1\}^m$$ that simultaneously achieves a near-optimal online complexity $$n + m + \textsf {poly} (\lambda , \log |C|)$$ (where $$\lambda $$ is the security parameter) and preserves the parallel efficiency of evaluating the garbled circuit; namely, if the depth of C is d, then the garbled circuit can be evaluated in parallel time $$d \cdot \textsf {poly} (\log |C|, \lambda )$$. In particular, our construction improves over the recent seminal work of [GS18], which constructs the first adaptively secure garbling scheme with a near-optimal online complexity under the same assumptions, but whose garbled circuit can only be evaluated gate by gate in a sequential manner. Our construction combines their novel idea of linearization with several new ideas to achieve parallel efficiency without compromising online complexity.

We take one step further and construct the first adaptively secure garbling scheme for parallel RAM (PRAM) programs under standard assumptions that preserves parallel efficiency. The previous such constructions we are aware of rely on strong assumptions like indistinguishability obfuscation. Our construction is based on the work of [GOS18] for adaptively secure garbled RAM, but again introduces several new ideas to handle parallel RAM computation, which may be of independent interest. As an application, this yields the first constant-round secure computation protocol for persistent PRAM programs in the malicious setting from standard assumptions.
2019
TCC
Algebraically Structured LWE, Revisited
Abstract
In recent years, there has been a proliferation of algebraically structured Learning With Errors (LWE) variants, including Ring-LWE, Module-LWE, Polynomial-LWE, Order-LWE, and Middle-Product LWE, and a web of reductions to support their hardness, both among these problems themselves and from related worst-case problems on structured lattices. However, these reductions are often difficult to interpret and use, due to the complexity of their parameters and analysis, and most especially their (frequently large) blowup and distortion of the error distributions.

In this paper we unify and simplify this line of work. First, we give a general framework that encompasses all proposed LWE variants (over commutative base rings), and in particular unifies all prior “algebraic” LWE variants defined over number fields. We then use this framework to give much simpler, more general, and tighter reductions from Ring-LWE to other algebraic LWE variants, including Module-LWE, Order-LWE, and Middle-Product LWE. In particular, all of our reductions have easy-to-analyze and frequently small error expansion; in some cases they even leave the error unchanged. A main message of our work is that it is straightforward to use the hardness of the original Ring-LWE problem as a foundation for the hardness of all other algebraic LWE problems defined over number fields, via simple and rather tight reductions.
2019
TCC
Attribute Based Encryption for Deterministic Finite Automata from $\mathsf{DLIN}$
Abstract
Waters [Crypto, 2012] provided the first attribute based encryption (ABE) scheme for Deterministic Finite Automata (DFA) from a parametrized or “q-type” assumption over bilinear maps. Obtaining a construction from static assumptions has been elusive, despite much progress in the area of ABE.

In this work, we construct the first attribute based encryption scheme for DFA from static assumptions on pairings, namely, the $$\mathsf{DLIN}$$ assumption. Our scheme supports unbounded length inputs, unbounded length machines and unbounded key requests. In more detail, secret keys in our construction are associated with a DFA M of unbounded length, ciphertexts are associated with a tuple $$(\mathbf {x}, \mathsf {\mu })$$ where $$\mathbf {x}$$ is a public attribute of unbounded length and $$\mathsf {\mu }$$ is a secret message bit, and decryption recovers $$\mathsf {\mu }$$ if and only if $$M(\mathbf {x})=1$$.

Our techniques are at least as interesting as our final result. We present a simple compiler that combines constructions of unbounded ABE schemes for monotone span programs (MSP) in a black box way to construct ABE for DFA. In more detail, we find a way to embed DFA computation into monotone span programs, which lets us compose existing constructions (modified suitably) of unbounded key-policy ABE ($${\mathsf {kpABE}}$$) and unbounded ciphertext-policy ABE ($${\mathsf {cpABE}}$$) for MSP in a simple and modular way to obtain key-policy ABE for DFA. Our construction uses its building blocks in a symmetric way – by swapping the use of the underlying $${\mathsf {kpABE}}$$ and $${\mathsf {cpABE}}$$, we also obtain a construction of ciphertext-policy ABE for DFA.

Our work extends techniques developed recently by Agrawal, Maitra and Yamada [Crypto 2019], which show how to construct ABE schemes that support unbounded machines and unbounded inputs by combining ABE schemes that are bounded in one coordinate. At the heart of our work is the observation that unbounded, multi-use ABE schemes for MSP already achieve most of what we need to build ABE for DFA.
2019
TCC
Channels of Small Log-Ratio Leakage and Characterization of Two-Party Differentially Private Computation
Abstract
Consider a ppt two-party protocol $$\varPi = (\mathsf {A} ,\mathsf {B} )$$ in which the parties get no private inputs and obtain outputs $$O^{\mathsf {A} },O^{\mathsf {B} }\in \left\{ 0,1\right\} $$, and let $$V^\mathsf {A} $$ and $$V^\mathsf {B} $$ denote the parties’ individual views. Protocol $$\varPi $$ has $$\alpha $$-agreement if $$\Pr [O^{\mathsf {A} }=O^{\mathsf {B} }] = \tfrac{1}{2}+\alpha $$. The leakage of $$\varPi $$ is the amount of information a party obtains about the event $$\left\{ O^{\mathsf {A} }=O^{\mathsf {B} }\right\} $$; that is, the leakage $$\epsilon $$ is the maximum, over $$\mathsf {P} \in \left\{ \mathsf {A} ,\mathsf {B} \right\} $$, of the distance between $$V^\mathsf {P} |_{O^{\mathsf {A} }= O^{\mathsf {B} }}$$ and $$V^\mathsf {P} |_{O^{\mathsf {A} }\ne O^{\mathsf {B} }}$$. Typically, this distance is measured in statistical distance, or, in the computational setting, in computational indistinguishability. For this choice, Wullschleger [TCC ’09] showed that if $$\epsilon \ll \alpha $$ then the protocol can be transformed into an OT protocol.

We consider measuring the protocol leakage by the log-ratio distance (which was popularized by its use in the differential privacy framework). The log-ratio distance between X, Y over domain $$\varOmega $$ is the minimal $$\epsilon \ge 0$$ for which, for every $$v \in \varOmega $$, $$\log \frac{\Pr [X=v]}{\Pr [Y=v]} \in [-\epsilon ,\epsilon ]$$. In the computational setting, we use computational indistinguishability from having log-ratio distance $$\epsilon $$. We show that a protocol with (noticeable) accuracy $$\alpha \in \varOmega (\epsilon ^2)$$ can be transformed into an OT protocol (note that this allows $$\epsilon \gg \alpha $$). We complete the picture, in this respect, showing that a protocol with $$\alpha \in o(\epsilon ^2)$$ does not necessarily imply OT. Our results hold for both the information-theoretic and the computational settings, and can be viewed as a “fine grained” approach to “weak OT amplification”.

We then use the above result to fully characterize the complexity of differentially private two-party computation for the XOR function, answering the open question posed by Goyal, Khurana, Mironov, Pandey, and Sahai [ICALP ’16] and Haitner, Nissim, Omri, Shaltiel, and Silbak [FOCS ’18]. Specifically, we show that for any (noticeable) $$\alpha \in \varOmega (\epsilon ^2)$$, a two-party protocol that computes the XOR function with $$\alpha $$-accuracy and $$\epsilon $$-differential privacy can be transformed into an OT protocol. This improves upon Goyal et al., who only handle $$\alpha \in \varOmega (\epsilon )$$, and upon Haitner et al., who showed that such a protocol implies (infinitely-often) key agreement (and not OT). Our characterization is tight since OT does not follow from protocols in which $$\alpha \in o( \epsilon ^2)$$, and extends to functions (over many bits) that “contain” an “embedded copy” of the XOR function.
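As a small illustration of the central definition, the following sketch (our own; all names are illustrative) computes the log-ratio distance between two finite distributions exactly as defined above.

```python
# Minimal sketch: the log-ratio distance is the smallest eps >= 0 such that
# |log(Pr[X=v] / Pr[Y=v])| <= eps for every v in the domain.
import math

def log_ratio_distance(p, q):
    """p, q: dicts mapping domain elements to probabilities."""
    eps = 0.0
    for v in set(p) | set(q):
        pv, qv = p.get(v, 0.0), q.get(v, 0.0)
        if pv == 0.0 and qv == 0.0:
            continue
        if pv == 0.0 or qv == 0.0:
            return math.inf                    # one-sided support: unbounded ratio
        eps = max(eps, abs(math.log(pv / qv)))
    return eps

p = {0: 0.6, 1: 0.4}
q = {0: 0.5, 1: 0.5}
print(log_ratio_distance(p, q))                # max(|ln 1.2|, |ln 0.8|) ~ 0.223
```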
2019
TCC
Characterizing Collision and Second-Preimage Resistance in Linicrypt
Abstract
Linicrypt (Carmer & Rosulek, Crypto 2016) refers to the class of algorithms that make calls to a random oracle and otherwise manipulate values via fixed linear operations. We give a characterization of collision-resistance and second-preimage resistance for a significant class of Linicrypt programs (specifically, those that achieve domain separation on their random oracle queries via nonces). Our characterization implies that collision-resistance and second-preimage resistance are equivalent, in an asymptotic sense, for this class. Furthermore, there is a polynomial-time procedure for determining whether such a Linicrypt program is collision/second-preimage resistant.
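For intuition, here is a toy example (ours, not from the paper) of a program in the Linicrypt spirit: it combines values only via XOR (a linear operation over GF(2)) and makes random-oracle calls that are domain-separated by nonces, which is the structure the characterization above applies to.

```python
# Toy Linicrypt-style compression function: linear operations plus
# nonce-separated random-oracle queries (SHA-256 stands in for the oracle).
import hashlib

def ro(nonce, data):
    """Random-oracle stand-in, domain-separated by a one-byte nonce."""
    return hashlib.sha256(bytes([nonce]) + data).digest()

def linicrypt_hash(x1, x2):
    """Compress two 32-byte blocks using XOR (linear) and two RO calls."""
    y1 = ro(1, x1)
    y2 = ro(2, bytes(a ^ b for a, b in zip(y1, x2)))   # linear combine, then query
    return bytes(a ^ b for a, b in zip(y1, y2))        # linear output

print(linicrypt_hash(b"\x00" * 32, b"\x01" * 32).hex()[:16])
```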
2019
TCC
Composable and Finite Computational Security of Quantum Message Transmission
Abstract
Recent research in quantum cryptography has led to the development of schemes that encrypt and authenticate quantum messages with computational security. The security definitions used so far in the literature are asymptotic, game-based, and not known to be composable. We show how to define finite, composable, computational security for secure quantum message transmission. The new definitions do not involve any games or oracles, they are directly operational: a scheme is secure if it transforms an insecure channel and a shared key into an ideal secure channel from Alice to Bob, i.e., one which only allows Eve to block messages and learn their size, but not change them or read them. By modifying the ideal channel to provide Eve with more or less capabilities, one gets an array of different security notions. By design these transformations are composable, resulting in composable security.

Crucially, the new definitions are finite. Security does not rely on the asymptotic hardness of a computational problem. Instead, one proves a finite reduction: if an adversary can distinguish the constructed (real) channel from the ideal one (for some fixed security parameters), then she can solve a finite instance of some computational problem. Such a finite statement is needed to make security claims about concrete implementations.

We then prove that (slightly modified versions of) protocols proposed in the literature satisfy these composable definitions. And finally, we study the relations between some game-based definitions and our composable ones. In particular, we look at notions of quantum authenticated encryption and $$\mathsf{QCCA2}$$, and show that they suffer from the same issues as their classical counterparts: they exclude certain protocols which are arguably secure.
2019
TCC
Compressible FHE with Applications to PIR
Abstract
Homomorphic encryption (HE) is often viewed as impractical, both in communication and computation. Here we provide an additively homomorphic encryption scheme based on (ring) LWE with nearly optimal rate ($$1-\epsilon $$ for any $$\epsilon >0$$). Moreover, we describe how to compress many Gentry-Sahai-Waters (GSW) ciphertexts (e.g., ciphertexts that may have come from a homomorphic evaluation) into (fewer) high-rate ciphertexts.

Using our high-rate HE scheme, we are able for the first time to describe a single-server private information retrieval (PIR) scheme with sufficiently low computational overhead so as to be practical for large databases. Single-server PIR inherently requires the server to perform at least one bit operation per database bit, and we describe a rate-(4/9) scheme whose computation is not much worse than this inherent lower bound. In fact it is probably cheaper than whole-database AES encryption – specifically about 2.3 mod-q multiplications per database byte, where q is about 50 to 60 bits. Asymptotically, the computational overhead of our PIR scheme is $$\tilde{O}(\log \log \mathsf {\lambda }+ \log \log \log N)$$, where $$\mathsf {\lambda }$$ is the security parameter and N is the number of database files, which are assumed to be sufficiently large.
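As a rough sanity check of the quoted figures (our own back-of-the-envelope arithmetic, using only the numbers in the abstract), the server cost for a concrete database size works out as follows.

```python
# Back-of-the-envelope server cost: about 2.3 mod-q multiplications per
# database byte (q of 50-60 bits), as quoted above. All sizes illustrative.
db_bytes = 1 << 30                     # a 1 GiB database
mults_per_byte = 2.3                   # figure from the abstract
total_mults = db_bytes * mults_per_byte
print(f"{total_mults:.3e} mod-q multiplications")   # ~2.5e9 for 1 GiB
# For comparison, whole-database AES touches every 16-byte block once, so the
# claim is that this cost is in the same ballpark as one AES pass over the data.
```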
2019
TCC
Continuously Non-malleable Secret Sharing for General Access Structures
Abstract
We study leakage-resilient continuously non-malleable secret sharing, as recently introduced by Faonio and Venturi (CRYPTO 2019). In this setting, an attacker can continuously tamper and leak from a target secret sharing of some message, with the goal of producing a modified set of shares that reconstructs to a message related to the originally shared value. Our contributions are twofold.

In the plain model, assuming one-to-one one-way functions, we show how to obtain noisy-leakage-resilient continuous non-malleability for arbitrary access structures, in case the attacker can continuously leak from and tamper with all of the shares independently.

In the common reference string model, we show how to obtain a new flavor of security which we dub bounded-leakage-resilient continuous non-malleability under selective $$k$$-partitioning. In this model, the attacker is allowed to partition the target $$n$$ shares into any number of non-overlapping blocks of maximal size $$k$$, and then can continuously leak from and tamper with the shares within each block jointly. Our construction works for arbitrary access structures, and assuming (doubly enhanced) trapdoor permutations and collision-resistant hash functions, we achieve a concrete instantiation for $$k\in O(\log n)$$.
Prior to our work, there was no secret sharing scheme achieving continuous non-malleability against joint tampering, and the only known scheme for independent tampering was tailored to threshold access structures.
2019
TCC
CPA-to-CCA Transformation for KDM Security
Abstract
We show that chosen-plaintext attack (CPA) security is equivalent to chosen-ciphertext attack (CCA) security for key-dependent message (KDM) security. Concretely, we show how to construct a public-key encryption (PKE) scheme that is KDM-CCA secure with respect to all functions computable by circuits of a-priori bounded size, based only on a PKE scheme that is KDM-CPA secure with respect to projection functions. Our construction works for KDM security in the single-user setting.

Our main result is achieved by combining the following two steps. First, we observe that by combining the results and techniques from the recent works by Lombardi et al. (CRYPTO 2019) and by Kitagawa et al. (CRYPTO 2019), we can construct a reusable designated-verifier non-interactive zero-knowledge (DV-NIZK) argument system based on an IND-CPA secure PKE scheme and a secret-key encryption (SKE) scheme satisfying one-time KDM security with respect to projection functions. This observation leads to the first reusable DV-NIZK argument system under the learning-parity-with-noise (LPN) assumption. Then, as the second and main technical step, we show a generic construction of a KDM-CCA secure PKE scheme using an IND-CPA secure PKE scheme, a reusable DV-NIZK argument system, and an SKE scheme satisfying one-time KDM security with respect to projection functions. Since the classical Naor-Yung paradigm (STOC 1990) with a DV-NIZK argument system does not work for proving KDM security, we propose a new construction methodology to achieve this generic construction.

Moreover, we show how to extend our generic construction and achieve KDM-CCA security in the multi-user setting, by additionally requiring the underlying SKE scheme in our generic construction to satisfy a weak form of KDM security against related-key attacks (RKA-KDM security) instead of one-time KDM security. From this extension, we obtain the first KDM-CCA secure PKE schemes in the multi-user setting under the CDH or LPN assumption.
2019
TCC
Delegating Quantum Computation in the Quantum Random Oracle Model
Abstract
A delegation scheme allows a computationally weak client to use a server’s resources to help it evaluate a complex circuit without leaking any information about the input (other than its length) to the server. In this paper, we consider delegation schemes for quantum circuits, where we try to minimize the quantum operations needed by the client. We construct a new scheme for delegating a large circuit family, which we call “C+P circuits”: the circuits composed of Toffoli gates and diagonal gates. Our scheme is non-interactive, requires a small amount of quantum computation from the client (proportional to input length but independent of the circuit size), and can be proved secure in the quantum random oracle model, without relying on additional assumptions such as the existence of fully homomorphic encryption. In practice the random oracle can be replaced by an appropriate hash function or block cipher, for example SHA-3 or AES.

This protocol allows a client to delegate the most expensive part of some quantum algorithms, for example, Shor’s algorithm. The previous protocols that are powerful enough to delegate Shor’s algorithm require either many client-side quantum operations or the existence of FHE. The protocol requires asymptotically fewer quantum gates on the client side compared to running Shor’s algorithm locally.

To hide the inputs, our scheme uses an encoding that maps one input qubit to multiple qubits. We then provide a novel generalization of classical garbled circuits (“reversible garbled circuits”) to allow the computation of Toffoli circuits on this encoding. We also give a technique that can support the computation of phase gates on this encoding.

To prove the security of this protocol, we study key-dependent message (KDM) security in the quantum random oracle model. KDM security was not previously studied in quantum settings.
2019
TCC
Efficient Information-Theoretic Secure Multiparty Computation over $\mathbb {Z}/p^k\mathbb {Z}$ via Galois Rings
Abstract
At CRYPTO 2018, Cramer et al. introduced a secret-sharing based protocol called SPD$$\mathbb {Z}_{2^k}$$ that allows for secure multiparty computation (MPC) in the dishonest majority setting over the ring of integers modulo $$2^k$$, thus solving a long-standing open question in MPC about secure computation over rings in this setting. In this paper we study this problem in the information-theoretic scenario. More specifically, we ask the following question: Can we obtain information-theoretic MPC protocols that work over rings with comparable efficiency to corresponding protocols over fields? We answer this question in the affirmative by presenting an efficient protocol for robust Secure Multiparty Computation over $$\mathbb {Z}/p^{k}\mathbb {Z}$$ (for any prime p and positive integer k) that is perfectly secure against active adversaries corrupting a fraction of at most 1/3 players, and a robust protocol that is statistically secure against an active adversary corrupting a fraction of at most 1/2 players.
2019
TCC
Efficient Private PEZ Protocols for Symmetric Functions
Abstract
A private PEZ protocol is a variant of secure multi-party computation performed using a (long) PEZ dispenser. The original paper by Balogh et al. presented a private PEZ protocol for computing an arbitrary function with n inputs. This result is interesting, but to the best of our knowledge no follow-up work has been presented since then. We show herein that it is possible to shorten the initial string (the sequence of candies filled in a PEZ dispenser) and the number of moves (a player pops out a specified number of candies in each move) drastically if the function is symmetric. Concretely, it turns out that the length of the initial string is reduced from $$\mathcal {O}(2^n!)$$ for general functions in Balogh et al.’s results to $$\mathcal {O}(n\cdot n!)$$ for symmetric functions, and $$2^n$$ moves for general functions are reduced to $$n^2$$ moves for symmetric functions. Our main idea is to utilize the recursive structure of symmetric functions to construct the protocol recursively. This idea originates from a new initial string we found for a private PEZ protocol for the three-input majority function, which differs from the string of the same length given by Balogh et al., who did not describe how they derived it.
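To get a feel for the improvement, the following sketch (ours; reading the stated $$\mathcal {O}(2^n!)$$ as $$(2^n)!$$) tabulates the two scalings for small n.

```python
# Compare initial-string lengths and move counts: general functions
# ((2^n)! candies, 2^n moves) versus symmetric functions (n*n! candies,
# n^2 moves), per the figures quoted in the abstract.
import math

print("n  general-string                      symmetric-string  moves(gen/sym)")
for n in range(2, 6):
    general_string = math.factorial(2**n)      # (2^n)! scaling
    symmetric_string = n * math.factorial(n)   # n * n! scaling
    print(n, general_string, symmetric_string, f"{2**n}/{n**2}")
```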
2019
TCC
Estimating Gaps in Martingales and Applications to Coin-Tossing: Constructions and Hardness
Abstract
Consider the representative task of designing a distributed coin-tossing protocol for n processors such that the probability of heads is $$X_0\in [0,1]$$. This protocol should be robust to an adversary who can reset one processor to change the distribution of the final outcome. For $$X_0=1/2$$, in the information-theoretic setting, no adversary can change the probability of the outcome of Blum’s well-known “majority protocol” by more than $$\frac{1}{\sqrt{2\pi n}}$$, i.e., it is $$\frac{1}{\sqrt{2\pi n}}$$ insecure.

In this paper, we study discrete-time martingales $$(X_0,X_1,\dotsc ,X_n)$$ such that $$X_i\in [0,1]$$, for all $$i\in \{0,\dotsc ,n\}$$, and $$X_n\in {\{0,1\}} $$. These martingales are commonplace in modeling stochastic processes like the coin-tossing protocols in the information-theoretic setting mentioned above. In particular, for any $$X_0\in [0,1]$$, we construct martingales that yield $$\frac{1}{2}\sqrt{\frac{X_0(1-X_0)}{n}}$$ insecure coin-tossing protocols. For $$X_0=1/2$$, our protocol requires only 40% of the processors to achieve the same security as the majority protocol.

The technical heart of our paper is a new inductive technique that uses geometric transformations to precisely account for the large gaps in these martingales. For any $$X_0\in [0,1]$$, we show that there exists a stopping time $$\tau $$ at which the martingale exhibits a large gap. The inductive technique simultaneously constructs martingales that demonstrate the optimality of our bound, i.e., martingales where the gap corresponding to any stopping time is small; in particular, we construct optimal martingales such that any stopping time $$\tau $$ has only a small gap. Our lower bound holds for all $$X_0\in [0,1]$$, while the previous bound of Cleve and Impagliazzo (1993) exists only for positive constant $$X_0$$. Conceptually, our approach employs only elementary techniques to analyze these martingales and entirely circumvents the complex probabilistic tools inherent to the approaches of Cleve and Impagliazzo (1993) and Beimel, Haitner, Makriyannis, and Omri (2018).

By appropriately restricting the set of possible stopping times, we present representative applications to constructing distributed coin-tossing/dice-rolling protocols, discrete control processes, fail-stop attacking coin-tossing/dice-rolling protocols, and black-box separations.
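The quoted $$\frac{1}{\sqrt{2\pi n}}$$ insecurity of the majority protocol can be checked empirically. Below is a small Monte Carlo sketch (our own; the attack shown, resetting a pivotal coin whenever the honest outcome would be tails by one vote, is the standard strategy, and all names are illustrative).

```python
# Monte Carlo sanity check: an adversary who wants HEADS resets one tails
# coin whenever tails would win by exactly one vote; the fresh flip turns the
# outcome to heads with probability 1/2, giving bias ~ 1/sqrt(2*pi*n).
import math
import numpy as np

def attacked_majority(n, trials, rng):
    coins = rng.integers(0, 2, size=(trials, n), dtype=np.int8)
    heads = coins.sum(axis=1)                                  # honest head counts
    pivotal = heads == (n - 1) // 2                            # tails wins by one vote
    heads = heads + pivotal * rng.integers(0, 2, size=trials)  # reset one tails coin
    return float(np.mean(heads > n // 2))

n = 101                                       # an odd number of processors
rng = np.random.default_rng(1)
print(attacked_majority(n, 200_000, rng))     # empirical Pr[heads] under attack
print(0.5 + 1 / math.sqrt(2 * math.pi * n))   # predicted 1/2 + 1/sqrt(2*pi*n)
```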
2019
TCC
From FE Combiners to Secure MPC and Back
Abstract
Cryptographic combiners allow one to combine many candidates for a cryptographic primitive, possibly based on different computational assumptions, into another candidate with the guarantee that the resulting candidate is secure as long as at least one of the original candidates is secure. While the original motivation of cryptographic combiners was to reduce trust in existing candidates, in this work, we study a rather surprising implication of combiners to constructing secure multiparty computation protocols. Specifically, we initiate the study of functional encryption combiners and show its connection to secure multiparty computation.

Functional encryption (FE) has incredible applications towards computing on encrypted data. However, constructing the most general form of this primitive has remained elusive. Although some candidate constructions exist, they rely on nonstandard assumptions, and thus, their security has been questioned. An FE combiner attempts to make use of these candidates while minimizing the trust placed on any individual FE candidate. Informally, an FE combiner takes in a set of FE candidates and outputs a secure FE scheme if at least one of the candidates is secure.

Another fundamental area in cryptography is secure multi-party computation (MPC), which has been extensively studied for several decades. In this work, we initiate a formal study of the relationship between functional encryption (FE) combiners and secure multi-party computation (MPC). In particular, we show implications in both directions between these primitives. As a consequence of these implications, we obtain the following main results.

A two-round semi-honest MPC protocol in the plain model secure against up to $$n-1$$ corruptions with communication complexity proportional only to the depth of the circuit being computed, assuming learning with errors (LWE). Prior two-round protocols based on standard assumptions that achieved this communication complexity required trust assumptions, namely, a common reference string.

A functional encryption combiner based on pseudorandom generators (PRGs) in $$\mathsf {NC}^1$$. This is a weak assumption, as such PRGs are implied by many concrete intractability problems commonly used in cryptography, such as ones related to factoring, discrete logarithm, and lattice problems [11]. Previous constructions of FE combiners, implicit in [7], were known only from LWE. Using this result, we build a universal construction of functional encryption: an explicit construction of functional encryption based only on the assumptions that functional encryption exists and PRGs in $$\mathsf {NC}^1$$.
2019
TCC
Fully Homomorphic NIZK and NIWI Proofs
Abstract
In this work, we define and construct fully homomorphic non-interactive zero knowledge (FH-NIZK) and non-interactive witness-indistinguishable (FH-NIWI) proof systems. We focus on the NP-complete language L, where, for a boolean circuit C and a bit b, the pair $$(C,b)\in L$$ if there exists an input $$\mathbf {w}$$ such that $$C(\mathbf {w})=b$$. For this language, we call a non-interactive proof system fully homomorphic if, given instances $$(C_i,b_i)\in L$$ along with their proofs $$\varPi _i$$, for $$i\in \{1,\ldots ,k\}$$, and given any circuit $$D:\{0,1\}^k\rightarrow \{0,1\}$$, one can efficiently compute a proof $$\varPi $$ for $$(C^*,b)\in L$$, where $$C^*(\mathbf {w}^{(1)},\ldots ,\mathbf {w}^{(k)})=D(C_1(\mathbf {w}^{(1)}),\ldots ,C_k(\mathbf {w}^{(k)}))$$ and $$D(b_1,\ldots ,b_k)=b$$. The key security property is unlinkability: the resulting proof $$\varPi $$ is indistinguishable from a fresh proof of the same statement.

Our first result, under the Decision Linear Assumption (DLIN), is an FH-NIZK proof system for L in the common random string model. Our more surprising second result (under a new decisional assumption on groups with bilinear maps) is an FH-NIWI proof system that requires no setup.
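The statement-level composition is easy to mirror in code. The sketch below (ours; it composes instances only, not proofs) forms $$(C^*, b)$$ from the $$(C_i, b_i)$$ and D exactly as in the definition above.

```python
# Compose instances (C_i, b_i) under a combining circuit D:
# C*(w_1,...,w_k) = D(C_1(w_1),...,C_k(w_k)) and b = D(b_1,...,b_k).
def compose_instance(circuits, bits, D):
    def C_star(*witnesses):
        return D(*(C(w) for C, w in zip(circuits, witnesses)))
    return C_star, D(*bits)

C1 = lambda w: w[0] & w[1]            # toy circuits on bit-tuples
C2 = lambda w: w[0] ^ w[1]
D = lambda y1, y2: y1 | y2
C_star, b = compose_instance([C1, C2], [1, 0], D)
# (1,1) and (0,0) witness (C1,1) and (C2,0), hence also witness (C*, b):
print(C_star((1, 1), (0, 0)), b)      # prints: 1 1
```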
2019
TCC
General Linear Group Action on Tensors: A Candidate for Post-quantum Cryptography
Abstract
Starting from the one-way group action framework of Brassard and Yung (Crypto’90), we revisit building cryptography based on group actions. Several previous candidates for one-way group actions no longer stand, due to progress both on classical algorithms (e.g., graph isomorphism) and quantum algorithms (e.g., discrete logarithm).

We propose the general linear group action on tensors as a new candidate to build cryptography based on group actions. Recent works (Futorny–Grochow–Sergeichuk, Lin. Alg. Appl., 2019) suggest that the underlying algorithmic problem, the tensor isomorphism problem, is the hardest one among several isomorphism testing problems arising from areas including coding theory, computational group theory, and multivariate cryptography. We present evidence to justify the viability of this proposal from a comprehensive study of state-of-the-art heuristic algorithms, theoretical algorithms, hardness results, as well as quantum algorithms.

We then introduce a new notion called pseudorandom group actions to further develop group-action based cryptography. Briefly speaking, given a group G acting on a set S, we assume that it is hard to distinguish two distributions of (s, t), either uniformly chosen from $$S\times S$$, or where s is randomly chosen from S and t is the result of applying a random group action of $$g\in G$$ on s. This subsumes the classical Decisional Diffie-Hellman assumption when specialized to a particular group action. We carefully analyze various attack strategies that support instantiating this assumption by the general linear group action on tensors.

Finally, we construct several cryptographic primitives such as digital signatures and pseudorandom functions. We give quantum security proofs based on the one-way group action assumption and the pseudorandom group action assumption.
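A toy instantiation (ours, with tiny illustrative parameters, far below cryptographic size) of the pseudorandom group action assumption for the general linear group action on 3-tensors: the task is to distinguish (s, t) with t uniform from (s, g·s).

```python
# Triples (A, B, C) of invertible matrices over F_p act on a 3-tensor s by
# transforming each axis; the assumption is that (s, g*s) looks uniform.
import numpy as np

p, n = 11, 3
rng = np.random.default_rng(2)

def random_invertible(n):
    while True:
        M = rng.integers(0, p, size=(n, n))
        if int(round(np.linalg.det(M))) % p != 0:   # exact for such tiny matrices
            return M

def act(A, B, C, t):
    """t'_{ijk} = sum_{a,b,c} A_{ia} B_{jb} C_{kc} t_{abc}  (mod p)."""
    return np.einsum("ia,jb,kc,abc->ijk", A, B, C, t) % p

s = rng.integers(0, p, size=(n, n, n))
g = tuple(random_invertible(n) for _ in range(3))
t_pseudo = act(*g, s)                          # "pseudorandom" sample (s, g*s)
t_unif = rng.integers(0, p, size=(n, n, n))    # "uniform" sample (s, t)
print(t_pseudo.shape == t_unif.shape)          # the assumption: at real sizes,
                                               # the two samples are hard to tell apart
```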
2019
TCC
Incrementally Verifiable Computation via Incremental PCPs
Abstract
If I commission a long computation, how can I check that the result is correct without re-doing the computation myself? This is the question that efficient verifiable computation deals with. In this work, we address the issue of verifying the computation as it unfolds. That is, at any intermediate point in the computation, I would like to see a proof that the current state is correct. Ideally, these proofs should be short, non-interactive, and easy to verify. In addition, the proof at each step should be generated efficiently by updating the previous proof, without recomputing the entire proof from scratch. This notion, known as incrementally verifiable computation, was introduced by Valiant [TCC 08] about a decade ago. Existing solutions follow the approach of recursive proof composition and can be based on strong and non-falsifiable cryptographic assumptions (so-called “knowledge assumptions”).

In this work, we present a new framework for constructing incrementally verifiable computation schemes in both the publicly verifiable and designated-verifier settings. Our designated-verifier scheme is based on somewhat homomorphic encryption (which can be based on Learning with Errors) and our publicly verifiable scheme is based on the notion of zero-testable homomorphic encryption, which can be constructed from ideal multi-linear maps [Paneth and Rothblum, TCC 17].

Our framework is anchored around the new notion of a probabilistically checkable proof (PCP) with incremental local updates. An incrementally updatable PCP proves the correctness of an ongoing computation, where after each computation step, the value of every symbol can be updated locally without reading any other symbol. This update results in a new PCP for the correctness of the next step in the computation. Our primary technical contribution is constructing such an incrementally updatable PCP. We show how to combine updatable PCPs with recently suggested (ordinary) verifiable computation to obtain our results.
2019
TCC
Interactive Non-malleable Codes
Abstract
Non-malleable codes (NMC), introduced by Dziembowski et al. [ICS’10], allow one to encode “passive” data in such a manner that when a codeword is tampered with, the original data either remains completely intact or is essentially destroyed.

In this work, we initiate the study of interactive non-malleable codes (INMCs) that allow for encoding “active communication” rather than passive data. An INMC allows two parties to engage in an interactive protocol such that an adversary who is able to tamper with the protocol messages either leaves the original transcript intact (i.e., the parties are able to reconstruct the original transcript) or the transcript is completely destroyed and replaced with an unrelated one.

We formalize a tampering model for interactive protocols and put forward the notion of INMCs. Since constructing INMCs for general adversaries is impossible (as in the case of non-malleable codes), we construct INMCs for several specific classes of tampering functions. These include bounded-state, split-state, and fragmented sliding window tampering functions. We also obtain lower bounds for threshold tampering functions via a connection to interactive coding. All of our results are unconditional.
2019
TCC
Is Information-Theoretic Topology-Hiding Computation Possible?
Abstract
Topology-hiding computation (THC) is a form of multi-party computation over an incomplete communication graph that maintains the privacy of the underlying graph topology. Existing THC protocols consider an adversary that may corrupt an arbitrary number of parties, and rely on cryptographic assumptions such as DDH.

In this paper we address the question of whether information-theoretic THC can be achieved by taking advantage of an honest majority. In contrast to the standard MPC setting, this problem has remained open in the topology-hiding realm, even for simple “privacy-free” functions like broadcast, and even when considering only semi-honest corruptions.

We uncover a rich landscape of both positive and negative answers to the above question, showing that which types of graphs are used and how they are selected is an important factor in determining the feasibility of hiding topology information-theoretically. In particular, our results include the following.

We show that topology-hiding broadcast (THB) on a line with four nodes, secure against a single semi-honest corruption, implies key agreement. This result extends to broader classes of graphs, e.g., THB on a cycle with two semi-honest corruptions.

On the other hand, we provide the first feasibility result for information-theoretic THC: for the class of cycle graphs, with a single semi-honest corruption.
Given the strong impossibilities, we put forth a weaker definition of distributional-THC, where the graph is selected from some distribution (as opposed to worst-case).
We present a formal separation between the definitions, by showing a distribution for which information-theoretic distributional-THC is possible, but even topology-hiding broadcast is not possible information-theoretically with the standard definition.

We demonstrate the power of our new definition via a new connection to adaptively secure low-locality MPC, where distributional-THC enables parties to “reuse” a secret low-degree communication graph even in the face of adaptive corruptions.
2019
TCC
Lattice Trapdoors and IBE from Middle-Product LWE
Abstract
Middle-product learning with errors (MP-LWE) was recently introduced by Rosca, Sakzad, Steinfeld and Stehlé (CRYPTO 2017) as a way to combine the efficiency of Ring-LWE with the more robust security guarantees of plain LWE. While Ring-LWE is at the heart of efficient lattice-based cryptosystems, it involves the choice of an underlying ring which is essentially arbitrary. In other words, the effect of this choice on the security of Ring-LWE is poorly understood. On the other hand, Rosca et al. showed that a new LWE variant, called MP-LWE, is as secure as Polynomial-LWE (another variant of Ring-LWE) over any of a broad class of number fields. They also demonstrated the usefulness of MP-LWE by constructing an MP-LWE based public-key encryption scheme whose efficiency is comparable to Ring-LWE based public-key encryption. In this work, we take this line of research further by showing how to construct Identity-Based Encryption (IBE) schemes that are secure under a variant of the MP-LWE assumption. Our IBE schemes match the efficiency of Ring-LWE based IBE, including a scheme in the random oracle model with keys and ciphertexts of size $$\tilde{O}(n)$$ (for n-bit identities).

We construct our IBE scheme following the lattice trapdoors paradigm of [Gentry, Peikert, and Vaikuntanathan, STOC’08]; our main technical contributions are introducing a new leftover hash lemma and instantiating a new variant of lattice trapdoors compatible with MP-LWE.

This work demonstrates that the efficiency/security tradeoff gains of MP-LWE can be extended beyond public-key encryption to more complex lattice-based primitives.
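For readers unfamiliar with the middle product underlying MP-LWE, here is a naive sketch (ours; the indexing follows the standard definition, keeping the middle d coefficients of the full product, and may differ cosmetically from the paper's convention).

```python
# Middle product over Z_q: multiply two polynomials in full, then keep only
# the d middle coefficients (indices k .. k+d-1, where len(a)+len(b)-1 = 2k+d).
def poly_mul(a, b, q):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return out

def middle_product(a, b, d, q):
    full = poly_mul(a, b, q)
    k = (len(full) - d) // 2
    return full[k:k + d]

q = 97
a = [1, 2, 3, 4]            # degree < 4
b = [5, 6, 7]               # degree < 3; full product has 6 coefficients
print(middle_product(a, b, d=2, q=q))   # [34, 52]: the 2 middle coefficients
```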
2019
TCC
Leveraging Linear Decryption: Rate-1 Fully-Homomorphic Encryption and Time-Lock Puzzles
Abstract
We show how to combine a fully-homomorphic encryption scheme with linear decryption and a linearly-homomorphic encryption scheme to obtain constructions with new properties. Specifically, we present the following new results.

(1) Rate-1 Fully-Homomorphic Encryption: We construct the first scheme with message-to-ciphertext length ratio (i.e., rate) $$1-\sigma $$ for $$\sigma = o(1)$$. Our scheme is based on the hardness of the Learning with Errors (LWE) problem, and $$\sigma $$ is proportional to the noise-to-modulus ratio of the assumption. Our building block is a construction of a new high-rate linearly-homomorphic encryption scheme. One application of this result is the first general-purpose secure function evaluation protocol in the preprocessing model where the communication complexity is within an additive factor of the optimal insecure protocol.

(2) Fully-Homomorphic Time-Lock Puzzles: We construct the first time-lock puzzle where one can evaluate any function over a set of puzzles without solving them, from standard assumptions. Prior work required the existence of sub-exponentially hard indistinguishability obfuscation.
2019
TCC
Linear-Size Constant-Query IOPs for Delegating Computation
Abstract
We study the problem of delegating computations via interactive proofs that can be probabilistically checked. Known as interactive oracle proofs (IOPs), these proofs extend probabilistically checkable proofs (PCPs) to multi-round protocols, and have received much attention due to their application to constructing cryptographic proofs (such as succinct non-interactive arguments). The relevant complexity measures for IOPs in this context are prover and verifier time, and query complexity.

We construct highly efficient IOPs for a rich class of nondeterministic algebraic computations, which includes succinct versions of arithmetic circuit satisfiability and rank-one constraint system (R1CS) satisfiability. For a time-T computation, we obtain prover arithmetic complexity $$O(T \log T)$$ and verifier complexity polylog(T). These IOPs are the first to simultaneously achieve the state of the art in prover complexity, due to [14], and in verifier complexity, due to [7]. We also improve upon the query complexity of both schemes.

The efficiency of our prover is a result of our highly optimized proof length; in particular, ours is the first construction that simultaneously achieves linear-size proofs and polylogarithmic-time verification, regardless of query complexity.
2019
TCC
Lower and Upper Bounds on the Randomness Complexity of Private Computations of AND
Abstract
We consider multi-party information-theoretic private protocols, and specifically their randomness complexity. The randomness complexity of private protocols is of interest both because random bits are considered a scarce resource, and because of the relation between that complexity measure and other complexity measures of boolean functions such as the circuit size or the sensitivity of the function being computed [12, 17].

More concretely, we consider the randomness complexity of the basic boolean function AND, which serves as a building block in the design of many private protocols. We show that AND cannot be privately computed using a single random bit, thus giving the first non-trivial lower bound on the 1-private randomness complexity of an explicit boolean function $$f: \{0,1\}^n \rightarrow \{0,1\}$$. We further show that the function AND, on any number of inputs n (one input bit per player), can be privately computed using 8 random bits (and 7 random bits in the special case of $$n=3$$ players), improving the upper bound of 73 random bits implicit in [17]. Together with our lower bound, we thus approach the exact determination of the randomness complexity of AND. To the best of our knowledge, the exact randomness complexity of private computation is not known for any explicit function (except for XOR, which is trivially 1-random, and for several degenerate functions).
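For contrast with AND, the folklore ring protocol below (our rendering, with illustrative names) shows why XOR is trivially 1-random: a single mask r makes every intermediate message uniformly distributed from any single other player's point of view.

```python
# 1-random private XOR: player 1 masks its bit with r, each player XORs in
# its own input and passes the message on, and player 1 finally unmasks.
import random

def private_xor(inputs, rng):
    r = rng.randint(0, 1)        # the single random bit, held by player 1
    msg = inputs[0] ^ r
    for x in inputs[1:]:         # every intermediate msg is masked by r,
        msg ^= x                 # hence uniform in any single player's view
    return msg ^ r               # player 1 unmasks and announces the result

rng = random.Random(3)
xs = [rng.randint(0, 1) for _ in range(5)]
assert private_xor(xs, rng) == sum(xs) % 2
print(xs, "->", sum(xs) % 2)
```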
2019
TCC
Matrix PRFs: Constructions, Attacks, and Applications to Obfuscation
Abstract
We initiate a systematic study of pseudorandom functions (PRFs) that are computable by simple matrix branching programs; we refer to these objects as “matrix PRFs”. Matrix PRFs are attractive due to their simplicity, strong connections to complexity theory and group theory, and recent applications in program obfuscation.

Our main results are:

We present constructions of matrix PRFs based on the conjectured hardness of computational problems pertaining to matrix products.

We show that any matrix PRF that is computable by a read-c, width-w branching program can be broken in time poly$$(w^c)$$; this means that any matrix PRF based on constant-width matrices must read each input bit $$\omega (\log (\lambda ))$$ times. Along the way, we simplify the “tensor switching lemmas” introduced in previous IO attacks.

We show that a subclass of the candidate local-PRG proposed by Barak et al. [Eurocrypt 2018] can be broken using simple matrix algebra.

We show that augmenting the CVW18 IO candidate with a matrix PRF provably immunizes the candidate against all known algebraic and statistical zeroizing attacks, as captured by a new and simple adversarial model.
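A minimal sketch (ours, with illustrative parameters) of what "computable by a matrix branching program" means: the key holds a pair of secret matrices per input position, and evaluation multiplies the matrices selected by the input bits. Note this toy program is read-once and constant-width, exactly the regime the poly$$(w^c)$$ attack above rules out for secure PRFs.

```python
# Evaluate a matrix branching program: one secret matrix pair per position,
# select by input bit, multiply left to right, output one entry mod p.
import numpy as np

p, w, n = 1_048_573, 5, 16                   # modulus, matrix width, input length
rng = np.random.default_rng(4)
key = rng.integers(0, p, size=(n, 2, w, w))  # key[i][b]: matrix for bit b at slot i

def matrix_prf(key, x):
    acc = np.eye(w, dtype=np.int64)
    for i, bit in enumerate(x):
        acc = (acc @ key[i][bit]) % p        # multiply in one matrix per input bit
    return int(acc[0, 0])                    # one entry of the product as output

x = [int(b) for b in "1011001110001011"]
print(matrix_prf(key, x))
```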
2019
TCC
New Approaches to Traitor Tracing with Embedded Identities
Abstract
In a traitor tracing (TT) system for n users, every user has his/her own secret key. Content providers can encrypt messages using a public key, and each user can decrypt the ciphertext using his/her secret key. Suppose some of the n users collude to construct a pirate decoding box. Then the tracing scheme has a special algorithm, called $$\mathsf {Trace}$$, which can identify at least one of the secret keys used to construct the pirate decoding box.

Traditionally, the trace algorithm outputs only the ‘index’ associated with the traitors. As a result, to use such systems, either a central master authority must map the indices to actual identities, or there should be a public mapping of indices to identities. Both these options are problematic, especially if we need public tracing with anonymity of users. Nishimaki, Wichs, and Zhandry (NWZ) [Eurocrypt 2016] addressed this problem by constructing a traitor tracing scheme where the identities of users are embedded in the secret keys, and the trace algorithm, given a decoding box D, can recover the entire identities of the traitors. We call such schemes ‘Embedded Identity Traitor Tracing’ schemes. NWZ constructed such schemes based on adaptively secure functional encryption (FE). Currently, the only known constructions of FE schemes are based on nonstandard assumptions such as multilinear maps and iO.

In this work, we study the problem of embedded identities TT based on standard assumptions. We provide a range of constructions based on different assumptions such as public key encryption (PKE), bilinear maps and the Learning with Errors (LWE) assumption. The different constructions have different efficiency trade-offs. In our PKE based construction, the ciphertext size grows linearly with the number of users; the bilinear maps based construction has sub-linear ($$\sqrt{n}$$) sized ciphertexts. Both these schemes have public tracing. The LWE based scheme is a private tracing scheme with optimal ciphertexts (i.e., $$\log (n)$$). Finally, we also present other notions of traitor tracing, and discuss how they can be built in a generic manner from our base embedded identity TT scheme.
2019
TCC
Obfuscated Fuzzy Hamming Distance and Conjunctions from Subset Product Problems
Abstract
We consider the problem of obfuscating programs for fuzzy matching (in other words, testing whether the Hamming distance between an n-bit input and a fixed n-bit target vector is smaller than some predetermined threshold). This problem arises in biometric matching and other contexts. We present a virtual-black-box (VBB) secure and input-hiding obfuscator for fuzzy matching for Hamming distance, based on certain natural number-theoretic computational assumptions. In contrast to schemes based on coding theory, our obfuscator is based on computational hardness rather than information-theoretic hardness, and can be implemented for a much wider range of parameters. The Hamming distance obfuscator can also be applied to obfuscation of matching under the $$\ell _1$$ norm on $$\mathbb {Z}^n$$.

We also consider obfuscating conjunctions. Conjunctions are equivalent to pattern matching with wildcards, which can be reduced in some cases to fuzzy matching. Our approach does not cover as general a range of parameters as other solutions, but it is much more compact. We study the relation between our obfuscation schemes and other obfuscators and give some advantages of our solution.
2019
TCC
On Fully Secure MPC with Solitary Output
Abstract
We study the possibility of achieving full security, with guaranteed output delivery, for secure multiparty computation of functionalities where only one party receives output, to which we refer as solitary functionalities. In the standard setting where all parties receive an output, full security typically requires an honest majority; otherwise even just achieving fairness is impossible. However, for solitary functionalities, fairness is clearly not an issue. This raises the following question: Is full security with no honest majority possible for all solitary functionalities?

We give a negative answer to this question, by showing the existence of solitary functionalities that cannot be computed with full security. While such a result cannot be proved using fairness-based arguments, our proof builds on the classical proof technique of Cleve (STOC 1986) for ruling out fair coin-tossing and extends it in a nontrivial way.

On the positive side, we show that full security against any number of malicious parties is achievable for many natural and useful solitary functionalities, including ones for which the multi-output version cannot be realized with full security.
2019
TCC
On Perfectly Secure 2PC in the OT-Hybrid Model
Abstract
A well-known result by Kilian [22] (ACM 1988) asserts that general secure two-party computation (2PC) with statistical security can be based on OT. Specifically, in the client-server model, where only one party – the client – receives an output, Kilian’s result shows that given the ability to call an ideal oracle that computes OT, two parties can securely compute an arbitrary function of their inputs with unconditional security. Ishai et al. [19] (EUROCRYPT 2011) further showed that this can be done efficiently for every two-party functionality in $$\mathrm {NC}^1$$ in a single round.

However, these results only achieve statistical security, namely, some error in security is allowed. This leaves open the natural question as to which client-server functionalities can be computed with perfect security in the OT-hybrid model, and what the round complexity of such computation is. So far, only a handful of functionalities were known to have such protocols. In addition to the obvious theoretical appeal of the question towards better understanding secure computation, perfect, as opposed to statistical, reductions may be useful for designing secure multiparty protocols with high concrete efficiency, achieved by eliminating the dependence on a security parameter.

In this work, we identify a large class of client-server functionalities $$f:\mathcal {X}\times \mathcal {Y}\mapsto \{0,1\}$$, where the server’s domain $$\mathcal {X}$$ is larger than the client’s domain $$\mathcal {Y}$$, that have a perfect reduction to OT. Furthermore, our reduction is 1-round using an oracle to secure evaluation of many parallel invocations of $$\left( {\begin{array}{c}2\\ 1\end{array}}\right) \text {-bit-OT}$$, as done by Ishai et al. [19] (EUROCRYPT 2011). Interestingly, the set of functions that we are able to compute was previously identified by Asharov [2] (TCC 2014) in the context of fairness in two-party computation, naming these functions full-dimensional. Our result also extends to randomized non-Boolean functions $$f: \mathcal {X}\times \mathcal {Y}\mapsto \left\{ 0,\ldots ,k-1\right\} $$ satisfying $$|\mathcal {X}|>(k-1)\cdot |\mathcal {Y}|$$.
2019
TCC
On the (In)security of Kilian-Based SNARGs
Abstract
The Fiat-Shamir transform is an incredibly powerful technique that uses a suitable hash function to reduce the interaction of general public-coin protocols. Unfortunately, there are known counterexamples showing that this methodology may not be sound (no matter what concrete hash function is used). Still, these counterexamples are somewhat unsatisfying, as the underlying protocols were specifically tailored to make Fiat-Shamir fail. This raises the question of whether this transform is sound when applied to natural protocols.

One of the most important protocols for which we would like to reduce interaction is Kilian’s four-message argument system for all of $$\mathsf {NP}$$, based on collision resistant hash functions ($$\mathsf {CRHF}$$s) and probabilistically checkable proofs ($$\mathsf {PCP}$$s). Indeed, an application of the Fiat-Shamir transform to Kilian’s protocol is at the heart of both theoretical results (e.g., Micali’s CS proofs) as well as leading practical approaches of highly efficient non-interactive proof-systems (e.g., $$\mathsf {SNARK}$$s and $$\mathsf {STARK}$$s).

In this work, we show significant obstacles to establishing soundness of (what we refer to as) the “Fiat-Shamir-Kilian-Micali” ($$\mathsf {FSKM}$$) protocol. More specifically:

First, we construct a (contrived) $$\mathsf {CRHF}$$ for which $$\mathsf {FSKM}$$ is unsound for a very large class of $$\mathsf {PCP}$$s and for any Fiat-Shamir hash function. The collision-resistance of our $$\mathsf {CRHF}$$ relies on very strong but plausible cryptographic assumptions. The statement is “tight” in the following sense: any $$\mathsf {PCP}$$ outside the scope of our result trivially implies a $$\mathsf {SNARK}$$, eliminating the need for $$\mathsf {FSKM}$$ in the first place.

Second, we consider a known extension of Kilian’s protocol to an interactive variant of $$\mathsf {PCP}$$s called probabilistically checkable interactive proofs ($$\mathsf {PCIP}$$s, also known as interactive oracle proofs or $$\mathsf {IOP}$$s). We construct a particular (contrived) $$\mathsf {PCIP}$$ for $$\mathsf {NP}$$ for which the $$\mathsf {FSKM}$$ protocol is unsound no matter what $$\mathsf {CRHF}$$ and Fiat-Shamir hash function are used. This result is unconditional (i.e., does not rely on any cryptographic assumptions).

Put together, our results show that the soundness of $$\mathsf {FSKM}$$ must rely on some special structure of both the $$\mathsf {CRHF}$$ and $$\mathsf {PCP}$$ that underlie Kilian’s protocol. We believe these negative results may cast light on how to securely instantiate the $$\mathsf {FSKM}$$ protocol by a synergistic choice of the $$\mathsf {PCP}$$, $$\mathsf {CRHF}$$, and Fiat-Shamir hash function.
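For reference, the transform at issue is mechanically simple. The sketch below (ours; generic, not the FSKM counterexample, with illustrative names) derives the verifier's queries by hashing the prover's first message, which is how interaction is removed from Kilian's protocol.

```python
# Fiat-Shamir, generically: replace the verifier's random coins with a hash
# of the transcript so far (here, the prover's commitment to a PCP).
import hashlib

def fs_challenge(transcript, num_queries, domain_size):
    """Derive PCP query positions deterministically from the transcript."""
    queries = []
    for i in range(num_queries):
        digest = hashlib.sha256(transcript + i.to_bytes(4, "big")).digest()
        queries.append(int.from_bytes(digest, "big") % domain_size)
    return queries

root = b"merkle-root-of-committed-pcp"   # placeholder for the prover's first message
print(fs_challenge(root, num_queries=3, domain_size=1 << 20))
```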
2019
TCC
On the Complexity of Collision Resistant Hash Functions: New and Old Black-Box Separations
Abstract
The complexity of collision-resistant hash functions has been long studied in the theory of cryptography. While we often think about them as a Minicrypt primitive, black-box separations demonstrate that constructions from one-way functions are unlikely. Indeed, theoretical constructions of collision-resistant hash functions are based on rather structured assumptions.

We make two contributions to this study:

1. A New Separation: We show that collision-resistant hashing does not imply hard problems in the class Statistical Zero Knowledge in a black-box way.

2. New Proofs: We show new proofs for the results of Simon, ruling out black-box reductions of collision-resistant hashing to one-way permutations, and of Asharov and Segev, ruling out black-box reductions to indistinguishability obfuscation. The new proofs are quite different from the previous ones and are based on simple coupling arguments.
2019
TCC
Optimal Bounded-Collusion Secure Functional Encryption
Abstract
We construct private-key and public-key functional encryption schemes in the bounded-key setting; that is, secure against adversaries that obtain an a-priori bounded number of functional keys (also known as the collusion bound).

An important metric considered in the literature on bounded-key functional encryption schemes is the dependence of the running time of the encryption algorithm on the collusion bound $$Q=Q(\lambda)$$ (where $$\lambda$$ is the security parameter). It is known that bounded-key functional encryption schemes with encryption complexity growing with $$Q^{1-\varepsilon}$$, for any constant $$\varepsilon > 0$$, imply indistinguishability obfuscation. On the other hand, in the public-key setting, it was previously unknown whether we could achieve encryption complexity growing linearly with Q, also known as optimal bounded-key FE, based on well-studied assumptions.

In this work, we give the first construction of an optimal bounded-key public-key functional encryption scheme under the minimal assumption of the existence of any public-key encryption scheme. Moreover, our scheme supports the class of all polynomial-size circuits. Our techniques also extend to the private-key setting: we achieve an optimal bounded-key functional encryption scheme in the private-key setting based on the minimal assumption of one-way functions, instead of learning with errors as in prior works.
2019
TCC
Permuted Puzzles and Cryptographic Hardness
Abstract
A permuted puzzle problem is defined by a pair of distributions $$\mathcal{D}_0,\mathcal{D}_1$$ over $$\varSigma^n$$. The problem is to distinguish samples from $$\mathcal{D}_0,\mathcal{D}_1$$, where the symbols of each sample are permuted by a single secret permutation $$\pi$$ of [n].

The conjectured hardness of specific instances of permuted puzzle problems was recently used to obtain the first candidate constructions of Doubly Efficient Private Information Retrieval (DE-PIR) (Boyle et al. & Canetti et al., TCC'17). Roughly, in these works the distributions $$\mathcal{D}_0,\mathcal{D}_1$$ over $${\mathbb F}^n$$ are evaluations of either a moderately low-degree polynomial or a random function. This new conjecture seems to be quite powerful, and is the foundation for the first DE-PIR candidates, almost two decades after the question was first posed by Beimel et al. (CRYPTO'00). However, while permuted puzzles are a natural and general class of problems, their hardness is still poorly understood.

We initiate a formal investigation of the cryptographic hardness of permuted puzzle problems. Our contributions lie in three main directions:

1. Rigorous formalization. We formalize a notion of permuted puzzle distinguishing problems, extending and generalizing the proposed permuted puzzle framework of Boyle et al. (TCC'17).

2. Identifying hard permuted puzzles. We identify natural examples in which a one-time permutation provably creates cryptographic hardness, based on "standard" assumptions. In these examples, the original distributions $$\mathcal{D}_0,\mathcal{D}_1$$ are easily distinguishable, but the permuted puzzle distinguishing problem is computationally hard. We provide such constructions in the random oracle model, and in the plain model under the Decisional Diffie-Hellman (DDH) assumption. We additionally observe that the Learning Parity with Noise (LPN) assumption itself can be cast as a permuted puzzle.

3. Partial lower bound for the DE-PIR problem. We make progress towards better understanding the permuted puzzles underlying the DE-PIR constructions, by showing that a toy version of the problem, introduced by Boyle et al. (TCC'17), withstands a rich class of attacks, namely those that distinguish solely via statistical queries.
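As a concrete toy instance (our own illustration, loosely modeled on the DE-PIR-style puzzle described above, with made-up parameters), the following sketch samples either low-degree polynomial evaluations or uniformly random field elements, then scrambles every sample with the same secret permutation:

```python
import random

P = 101      # small prime field, illustration only
N = P        # one symbol per field element
DEGREE = 3   # "moderately low" degree for the toy puzzle

def sample(b: int, pi: list) -> list:
    """One permuted-puzzle sample: D_0 = low-degree poly evals, D_1 = random."""
    if b == 0:
        coeffs = [random.randrange(P) for _ in range(DEGREE + 1)]
        vals = [sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
                for x in range(N)]
    else:
        vals = [random.randrange(P) for _ in range(N)]
    # The same secret permutation pi scrambles the symbol positions of every sample.
    return [vals[pi[j]] for j in range(N)]

secret_pi = list(range(N))
random.shuffle(secret_pi)
b = random.randrange(2)
samples = [sample(b, secret_pi) for _ in range(10)]
# A distinguisher is given `samples` (but neither secret_pi nor b) and must guess b.
```

Unpermuted, the two cases are easy to tell apart (e.g., by polynomial interpolation); the conjecture is that the single hidden permutation makes distinguishing computationally hard.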
2019
TCC
Predicate Encryption from Bilinear Maps and One-Sided Probabilistic Rank
Abstract
In predicate encryption for a function f, an authority can create ciphertexts and secret keys which are associated with 'attributes'. A user with decryption key $$K_y$$ corresponding to attribute y can decrypt a ciphertext $$CT_x$$ corresponding to a message m and attribute x if and only if $$f(x,y)=0$$. Furthermore, the attribute x remains hidden to the user if $$f(x,y) \ne 0$$. We construct predicate encryption from assumptions on bilinear maps for a large class of new functions, including sparse set disjointness, Hamming distance at most k, inner product mod 2, and any function with an efficient Arthur-Merlin communication protocol. Our construction uses a new probabilistic representation of Boolean functions we call 'one-sided probabilistic rank,' and combines it with known constructions of inner product encryption in a novel way.
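The functionality itself is easy to state in code; the sketch below (our own illustration, not from the paper) pins down the decryption condition and instantiates f with one predicate from the paper's class, inner product mod 2:

```python
def can_decrypt(f, x, y) -> bool:
    """Predicate-encryption semantics: K_y opens CT_x iff f(x, y) == 0;
    otherwise decryption fails and the attribute x stays hidden."""
    return f(x, y) == 0

def ip_mod2(x, y) -> int:
    """Inner product mod 2, one of the predicates handled by the construction."""
    return sum(a * b for a, b in zip(x, y)) % 2

assert can_decrypt(ip_mod2, [1, 0, 1], [1, 1, 1])      # 1+0+1 = 2 = 0 mod 2
assert not can_decrypt(ip_mod2, [1, 0, 0], [1, 1, 1])  # 1 = 1 mod 2
```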
2019
TCC
Secure Computation with Preprocessing via Function Secret Sharing
Abstract
We propose a simple and powerful new approach for secure computation with input-independent preprocessing, building on the general tool of function secret sharing (FSS) and its efficient instantiations. Using this approach, we can make efficient use of correlated randomness to compute any type of gate, as long as a function class naturally corresponding to this gate admits an efficient FSS scheme. Our approach can be viewed as a generalization of the “TinyTable” protocol of Damgård et al. (Crypto 2017), where our generalized variant uses FSS to achieve exponential efficiency improvement for useful types of gates.

By instantiating this general approach with efficient PRG-based FSS schemes of Boyle et al. (Eurocrypt 2015, CCS 2016), we can implement useful nonlinear gates for equality tests, integer comparison, bit-decomposition and more with optimal online communication and with a relatively small amount of correlated randomness. We also provide a unified and simplified view of several existing protocols in the preprocessing model via the FSS framework.

Our positive results provide a useful tool for secure computation tasks that involve secure integer comparisons or conversions between arithmetic and binary representations. These arise in the contexts of approximating real-valued functions, machine-learning classification, and more. Finally, we study the necessity of the FSS machinery that we employ, in the simple context of secure string equality testing. First, we show that any “online-optimal” secure equality protocol implies an FSS scheme for point functions, which in turn implies one-way functions. Then, we show that information-theoretic secure equality protocols with relaxed optimality requirements would follow from the existence of big families of “matching vectors.” This suggests that proving strong lower bounds on the efficiency of such protocols would be difficult.
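To make the FSS-based gate evaluation concrete, here is a deliberately naive sketch of an online equality test (our own illustration): the "FSS" keys below are XOR-shares of the point function's entire truth table, which is correct but exponentially long; the PRG-based distributed point functions of Boyle et al. realize the same Gen/Eval interface with short keys:

```python
import random

MOD = 2**16  # toy domain; real protocols use larger rings / bitstrings

def fss_gen_point(r: int):
    """Toy FSS for the point function f_r(u) = 1 iff u == r.
    Keys are XOR-shares of the full truth table (illustration only)."""
    k0 = [random.randrange(2) for _ in range(MOD)]
    k1 = [int(u == r) ^ k0[u] for u in range(MOD)]
    return k0, k1

def fss_eval(key, u: int) -> int:
    return key[u]

# Offline phase: a dealer samples a random mask r and hands out the FSS keys.
r = random.randrange(MOD)
k0, k1 = fss_gen_point(r)

# Online phase, equality test x == y: the parties open the masked
# difference u = (x - y + r) mod MOD; note x == y exactly when u == r.
x, y = 42, 42
u = (x - y + r) % MOD
share0, share1 = fss_eval(k0, u), fss_eval(k1, u)
assert share0 ^ share1 == int(x == y)  # XOR-shares of the equality bit
```

This sketch glosses over how x and y are secret-shared between the parties; its point is only that the online phase is a single masked opening followed by local FSS evaluation, which is what "optimal online communication" refers to above.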
2019
TCC
Statistical Difference Beyond the Polarizing Regime
Abstract
The polarization lemma for statistical distance ($${\text{SD}}$$), due to Sahai and Vadhan (JACM, 2003), is an efficient transformation taking as input a pair of circuits $$(C_0,C_1)$$ and an integer k and outputting a new pair of circuits $$(D_0,D_1)$$ such that if $${\text{SD}}(C_0,C_1) \ge \alpha$$ then $${\text{SD}}(D_0,D_1) \ge 1-2^{-k}$$ and if $${\text{SD}}(C_0,C_1) \le \beta$$ then $${\text{SD}}(D_0,D_1) \le 2^{-k}$$. The polarization lemma is known to hold for any constant values $$\beta < \alpha^2$$, but extending the lemma to the regime in which $$\alpha^2 \le \beta < \alpha$$ has remained elusive. The focus of this work is on studying the latter regime of parameters. Our main results are:

1. Polarization lemmas for different notions of distance, such as Triangular Discrimination ($${{\,\mathrm{TD}\,}}$$) and Jensen-Shannon Divergence ($${{\,\mathrm{JS}\,}}$$), which enable polarization for some problems where the statistical distance satisfies $$\alpha^2< \beta < \alpha$$. We also derive a polarization lemma for statistical distance with any inverse-polynomially small gap between $$\alpha^2$$ and $$\beta$$ (rather than a constant).

2. The average-case hardness of the statistical difference problem (i.e., determining whether the statistical distance between two given circuits is at least $$\alpha$$ or at most $$\beta$$), for any values of $$\beta < \alpha$$, implies the existence of one-way functions. Such a result was previously only known for $$\beta < \alpha^2$$.

3. A (direct) constant-round interactive proof for estimating the statistical distance between any two distributions (up to any inverse polynomial error) given circuits that generate them. Proofs of closely related statements have appeared in the literature, but we give a new proof which we find to be cleaner and more direct.
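To see where the $$\beta < \alpha^2$$ barrier comes from, it helps to recall (standard facts, not specific to this paper) the two operations Sahai and Vadhan alternate, and how each moves the pair $$(\alpha, \beta)$$:

```latex
% XOR lemma: D_b samples bits b_1,...,b_k uniformly subject to
% b_1 XOR ... XOR b_k = b and outputs (C_{b_1},...,C_{b_k}):
\mathrm{SD}(D_0,D_1) = \mathrm{SD}(C_0,C_1)^k
  \quad\Longrightarrow\quad (\alpha,\beta) \mapsto (\alpha^k, \beta^k)

% Direct product: E_b consists of m independent copies of C_b:
1 - 2e^{-m\,\mathrm{SD}(C_0,C_1)^2/2} \;\le\; \mathrm{SD}(E_0,E_1) \;\le\; m \cdot \mathrm{SD}(C_0,C_1)
  \quad\Longrightarrow\quad (\alpha,\beta) \mapsto \bigl(1 - 2e^{-m\alpha^2/2},\; m\beta\bigr)

% Composing: XOR first gives (\alpha^k, \beta^k); driving the upper branch to
% 1 - 2^{-k} requires m \approx \alpha^{-2k} product copies, which pushes the
% lower branch to m\beta^k \approx (\beta/\alpha^2)^k -- vanishing iff \beta < \alpha^2.
```

Replacing statistical distance with triangular discrimination or Jensen-Shannon divergence, as in the first result above, changes how these quantities compose under such operations and thereby escapes this calculation for some parameter settings.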
2019
TCC
Stronger Lower Bounds for Online ORAM
Abstract
Oblivious RAM (ORAM), introduced in the context of software protection by Goldreich and Ostrovsky [JACM'96], aims at obfuscating the memory access pattern induced by a RAM computation. Ideally, the memory access pattern of an ORAM should be independent of the data being processed. Since the work of Goldreich and Ostrovsky, it has been believed that there is an inherent $$\varOmega(\log n)$$ bandwidth overhead in any ORAM working with memory of size n. Larsen and Nielsen [CRYPTO'18] were the first to give a general $$\varOmega(\log n)$$ lower bound for any online ORAM, i.e., an ORAM that must process its inputs in an online manner.

In this work, we revisit the lower bound of Larsen and Nielsen, which was proved under the assumption that the adversarial server knows exactly which server accesses correspond to which input operation. We give an $$\varOmega(\log n)$$ lower bound for the bandwidth overhead of any online ORAM even when the adversary has no access to this information. For many known constructions of ORAM this information is provided implicitly, as each input operation induces an access sequence of roughly the same length; thus, they are subject to the lower bound of Larsen and Nielsen. Our results rule out a broader class of constructions; specifically, they imply that obfuscating the boundaries between the input operations does not help in building a more efficient ORAM.

As our main technical contribution, and to handle the lack of structure, we study the properties of access graphs induced naturally by the memory access pattern of an ORAM computation. We identify a particular graph property that can be efficiently tested and that all access graphs of ORAM computations must satisfy with high probability. This property is reminiscent of the Larsen-Nielsen property but substantially less structured; that is, it is more generic.
2019
TCC
Succinct Arguments in the Quantum Random Oracle Model
Abstract
Succinct non-interactive arguments (SNARGs) are highly efficient certificates of membership in non-deterministic languages. Constructions of SNARGs in the random oracle model are widely believed to be post-quantum secure, provided the oracle is instantiated with a suitable post-quantum hash function. No formal evidence, however, supports this belief.

In this work we provide the first such evidence by proving that the SNARG construction of Micali is unconditionally secure in the quantum random oracle model. We also prove that, analogously to the classical case, the SNARG inherits the zero knowledge and proof of knowledge properties of the PCP underlying the Micali construction. We thus obtain the first zero knowledge SNARG of knowledge (zkSNARK) that is secure in the quantum random oracle model.

Our main tool is a new lifting lemma that shows how, for a rich class of oracle games, we can generically deduce security against quantum attackers by bounding a natural classical property of these games. This means that in order to prove our theorem we only need to establish classical properties about the Micali construction. This approach not only lets us prove post-quantum security but also enables us to prove explicit bounds that are tight up to small factors.

We additionally use our techniques to prove that SNARGs based on interactive oracle proofs (IOPs) with round-by-round soundness are unconditionally secure in the quantum random oracle model. This result establishes the post-quantum security of many SNARGs of practical interest.
2019
TCC
Synchronous Consensus with Optimal Asynchronous Fallback Guarantees
Abstract
Typically, protocols for Byzantine agreement (BA) are designed to run in either a synchronous network (where all messages are guaranteed to be delivered within some known time $$\varDelta$$ from when they are sent) or an asynchronous network (where messages may be arbitrarily delayed). Protocols designed for synchronous networks are generally insecure if the network in which they run does not ensure synchrony; protocols designed for asynchronous networks are (of course) secure in a synchronous setting as well, but in that case tolerate a lower fraction of faults than would have been possible if synchrony had been assumed from the start.

Fix some number of parties n, and $$0 < t_a < n/3 \le t_s < n/2$$. We ask whether it is possible (given a public-key infrastructure) to design a BA protocol that is resilient to (1) $$t_s$$ corruptions when run in a synchronous network and (2) $$t_a$$ faults even if the network happens to be asynchronous. We show matching feasibility and infeasibility results demonstrating that this is possible if and only if $$t_a + 2\cdot t_s < n$$.
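As a quick sanity check of the threshold (our own worked instance, not from the paper), consider n = 10:

```latex
% Feasibility requires 0 < t_a < n/3 \le t_s < n/2 and t_a + 2 t_s < n.
% Achievable: n = 10, t_s = 4, t_a = 1:
1 < \tfrac{10}{3} \le 4 < 5, \qquad 1 + 2 \cdot 4 = 9 < 10 \quad \checkmark
% Ruled out: n = 10, t_s = 4, t_a = 2:
2 + 2 \cdot 4 = 10 \not< 10 \quad \times
```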
2019
TCC
The Function-Inversion Problem: Barriers and Opportunities
Abstract
★ Best Young Researcher
The task of function inversion is central to cryptanalysis: breaking block ciphers, forging signatures, and cracking password hashes are all special cases of the function-inversion problem. In 1980, Hellman showed that it is possible to invert a random function $$f{:}\,[N] \rightarrow [N]$$ in time $$T = \widetilde{O}(N^{2/3})$$ given only $$S = \widetilde{O}(N^{2/3})$$ bits of precomputed advice about f. Hellman's algorithm is the basis for the popular "Rainbow Tables" technique (Oechslin 2003), which achieves the same asymptotic cost and is widely used in practical cryptanalysis.

Is Hellman's method the best possible algorithm for inverting functions with preprocessed advice? The best known lower bound, due to Yao (1990), shows that $$ST = \widetilde{\Omega}(N)$$, which still admits the possibility of an $$S = T = \widetilde{O}(N^{1/2})$$ attack. There remains a long-standing and vexing gap between Hellman's $$N^{2/3}$$ upper bound and Yao's $$N^{1/2}$$ lower bound. Understanding the feasibility of an $$S = T = N^{1/2}$$ algorithm is cryptanalytically relevant since such an algorithm could perform a key-recovery attack on AES-128 in time $$2^{64}$$ using a precomputed table of size $$2^{64}$$.

For the past 29 years, there has been no progress either in improving Hellman's algorithm or in strengthening Yao's lower bound. In this work, we connect function inversion to problems in other areas of theory to (1) explain why progress may be difficult and (2) explore possible ways forward. Our results are as follows:

1. We show that any improvement on Yao's lower bound on function-inversion algorithms will imply new lower bounds on depth-two circuits with arbitrary gates. Further, we show that proving strong lower bounds on non-adaptive function-inversion algorithms would imply breakthrough circuit lower bounds on linear-size log-depth circuits.

2. We take first steps towards the study of the injective function-inversion problem, which has manifold cryptographic applications. In particular, we show that improved algorithms for breaking PRGs with preprocessing would give improved algorithms for inverting injective functions with preprocessing.

3. Finally, we show that function inversion is closely related to well-studied problems in communication complexity and data structures. Through these connections we immediately obtain the best known algorithms for problems in these domains.
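To make the gap concrete (standard arithmetic implied by the bounds above), set $$S = T$$:

```latex
% Yao:     ST = \widetilde{\Omega}(N) with S = T only forces  S = T = \widetilde{\Omega}(N^{1/2})
% Hellman: achieves                                           S = T = \widetilde{O}(N^{2/3})
% For AES-128, N = 2^{128}:
N^{1/2} = 2^{64} \qquad \text{vs.} \qquad N^{2/3} \approx 2^{85.3}
```

So a hypothetical $$N^{1/2}$$-type inverter would bring a $$2^{64}$$-time, $$2^{64}$$-space key-recovery attack within reach, while Hellman's method needs roughly $$2^{85}$$ for the same key size.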
2019
TCC
Tighter Proofs of CCA Security in the Quantum Random Oracle Model
Abstract
We revisit the construction of IND-CCA secure key encapsulation mechanisms (KEMs) from public-key encryption schemes (PKEs). We give new, tighter security reductions for several constructions. Our main result is an improved reduction for the security of the $$U^{\not\bot}$$-transform of Hofheinz, Hövelmanns, and Kiltz (TCC'17), which turns OW-CPA secure deterministic PKEs into IND-CCA secure KEMs. This result is enabled by a new one-way to hiding (O2H) lemma which gives a tighter bound than previous O2H lemmas in certain settings and might be of independent interest. We also extend this result to the case of PKEs with non-zero decryption failure probability and non-deterministic PKEs; however, we then assume that the derandomized PKE is injective with overwhelming probability.

In addition, we analyze the impact of different variations of the $$U^{\not\bot}$$-transform discussed in the literature on the security of the final scheme. We consider the difference between explicit ($$U^{\bot}$$) and implicit ($$U^{\not\bot}$$) rejection, proving that security of the former implies security of the latter. We show that the opposite direction holds if the scheme with explicit rejection also uses key confirmation. Finally, we prove that (at least from a theoretical point of view) security is independent of whether the session keys are derived from message and ciphertext ($$U^{\not\bot}$$) or just from the message ($$U^{\not\bot}_m$$).
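The difference between explicit and implicit rejection is easiest to see in code. The sketch below is our own illustration of the standard transforms, with a hypothetical `pke` object providing `decrypt`, `encrypt_deterministic`, and `public_key`, and SHA-256 standing in for the random oracle:

```python
import hashlib

def H(*parts: bytes) -> bytes:
    """Random oracle stand-in for this sketch."""
    return hashlib.sha256(b"|".join(parts)).digest()

def decaps_explicit(sk, c: bytes, pke):
    """U^{bot}: invalid ciphertexts cause a visible failure."""
    m = pke.decrypt(sk, c)
    if m is None or pke.encrypt_deterministic(pke.public_key(sk), m) != c:
        return None                # explicit rejection symbol
    return H(m, c)                 # session key from message and ciphertext

def decaps_implicit(sk, s: bytes, c: bytes, pke):
    """U^{not bot}: failures return a pseudorandom key, so an adversary
    cannot tell from the output alone whether decryption succeeded."""
    m = pke.decrypt(sk, c)
    if m is None or pke.encrypt_deterministic(pke.public_key(sk), m) != c:
        return H(s, c)             # s: random seed kept in the secret key
    return H(m, c)
```

The result above says that security of the explicit-rejection variant implies security of the implicit one, with the converse holding when explicit rejection is combined with key confirmation.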