Today was my first full lecture this Fall. You’d think by now I’d be quite used to lecturing. But halfway through, I was already feeling tired — maybe I should have had a better breakfast. More likely, it’s just all the summer idleness taking its toll.
Another possible reason for being exhausted is that I was packing material from some 3-5 lectures of past courses into a single lecture. Admittedly, it was a little brisk, but hey, it wasn’t that unreasonable since we didn’t attempt any proofs in class. Nevertheless, a little bit of explanatory text will be useful, since the textbooks don’t do it the same way we did. So here you go.
In keeping with the idea of teaching as few ideas as possible, we have essentially only one security definition for a variety of tasks and settings. These are definitions in the “real/ideal paradigm,” also known as simulation-based security. I discussed the basic philosophy here and here. A little more follows.
The security definition presents an “ideal world” where the primitive (message transmission, in the case of encryption) is modeled as an ideal functionality (a channel which the adversary can’t read from). It also presents a “real world” model, where the encryption scheme (whose security we are defining) is deployed; this real world should model all the power the adversary could possibly have. Roughly, the security definition says that whatever an adversary can make happen in the real world, an adversary could have made happen in the ideal world too. Both these worlds come with the following cast of characters: the parties using the primitive (i.e., Alice and Bob who are participating in the message transmission), the adversary (Eve) and the environment. The Environment is perhaps the key novelty here.
The Environment. It models where the inputs to the parties (the message to be encrypted by Alice) come from, and where the outputs from the parties (the message obtained by Bob by decrypting) go. Note that Alice and Bob stand merely for the encryption/decryption (+ key-generation) algorithms; so Alice must get the message from outside, and Bob must output the message to the outside. That outside is part of the environment. But there is more to the environment. The adversary might have arbitrary influence over, or arbitrary information about, the inputs to Alice. So we let the environment interact with the adversary throughout its lifetime.
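To make the moving parts concrete, here is a minimal Python sketch of the two experiments. All the names (Env, Adv, Sim, and the scheme interface) are invented for illustration, and the ideal-world adversary is written as a simulator Sim; this is not a formal definition, just the shape of one.

```python
def real_experiment(scheme, Env, Adv):
    """Real world: the encryption scheme is deployed, and Eve sees the wire."""
    env, adv = Env(), Adv()
    key = scheme.keygen()
    while not env.done():
        msg = env.next_message()                      # Alice's input comes from the environment
        ct = scheme.encrypt(key, msg)                 # Alice encrypts
        adv.observe(ct)                               # Eve sees the ciphertext on the channel
        env.receive_output(scheme.decrypt(key, ct))   # Bob's output goes back to the environment
        env.talk_to(adv)                              # arbitrary ongoing interaction with the adversary
    return env.output()                               # the environment's "behavior"

def ideal_experiment(Env, Sim):
    """Ideal world: an ideal channel delivers the message; the adversary can't read it."""
    env, sim = Env(), Sim()
    while not env.done():
        msg = env.next_message()
        sim.observe(None)            # the ideal functionality reveals nothing of the message
        env.receive_output(msg)      # delivered verbatim by the ideal channel
        env.talk_to(sim)
    return env.output()
```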
The security requirement is that in the real world and in the ideal world, the environment behaves (almost) identically. What do we mean by “behaves”? The environment is modeled as producing an output at the end of its life. But this could in fact be the entire history of the environment’s life — all the messages it ever received and the outcomes of all the coins it ever flipped (which together determine the rest of its behavior). Then the behavior of the environment in an experiment — the real world or the ideal world — is defined by the probability distribution over the various output strings that it can produce. (The probability is over the coin tosses of all the entities in the world — the parties, the adversary and the environment itself.) But it turns out that without loss of generality, we can consider only environments which output a single bit. [DDET (Why?)]
Let $S$ be the set of possible outputs of an environment $Z$. For each subset $T \subseteq S$, consider modifying $Z$ into a new environment $Z_T$ which outputs 1 iff the output of $Z$ is in $T$. Now, if the output distributions of $Z$ in the two experiments (i.e., the real and ideal worlds) are not identical, then there is a non-empty set $T^* \subseteq S$ of outputs whose total probability in the real world is higher than in the ideal world. Then the probability of $Z_{T^*}$ outputting 1 in the two experiments will be different too. Hence, if some environment with a long output behaves differently in the real and ideal worlds, then some environment which outputs a single bit behaves differently too. [/DDET]
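In code, the wrapper $Z_T$ from the argument above is just a few lines; this is again purely illustrative, reusing the hypothetical Env interface from the earlier sketch.

```python
def restrict_to_set(Env, T):
    """Turn an environment with long outputs into a single-bit environment:
    it outputs 1 iff the wrapped environment's output lands in the set T."""
    class EnvT(Env):
        def output(self):
            return 1 if super().output() in T else 0
    return EnvT
```

If the output distributions of Env differ across the two worlds, then restrict_to_set(Env, T*) for the set T* from the argument above already behaves differently, even though it outputs only one bit.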
Absolute vs. Relative Security. Note that if we are considering arbitrary environments interacting arbitrarily with the adversary, we can’t provide any meaningful absolute guarantees. For instance, we can’t say that the adversary will not learn the first bit of the message, because the environment might just give it away. Instead, we are interested in providing relative security guarantees. (This indeed was the point of this earlier note.) The simulation based definitions make this quite explicit.
Technical vs. Convincing definitions. All the SIM-* definitions we saw in class have counterparts among the IND-* definitions. (And this will be so when we get to public-key encryption as well.) The IND-* definitions, namely IND-CPA and IND-CCA, are more “standard.” The IND definitions are remarkably insightful and delightfully simple (if you ask a connoisseur, that is), but can be quite cryptic (no pun intended) in that it is not immediately clear why a given definition is the “right” one. On the other hand, a SIM definition is a sort of catch-all definition: it doesn’t necessarily get to the technical heart of the matter, but it makes clear what security guarantee is being provided (namely, “you can imagine you are in the ideal world”).
When an IND definition (plus a correctness requirement) turns out to be equivalent to a SIM definition, one can think of the former as having distilled out the technical essence of the SIM definition. Typically, an IND definition describes a very specific environment — a single environment/experiment that is capable of testing the security of a scheme. Then, if you want to prove that a scheme is secure, you’d probably want to work with an IND definition, because writing a direct proof for the SIM definition can often be quite tedious. On the other hand, the security guarantee of a SIM definition would typically be easier to exploit directly, if you want to use your scheme in a complex environment (say, inside a larger protocol).
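As an illustration of “a very specific environment,” here is a rough sketch of one common formulation of IND-CPA (the “left-or-right” game). The interface names are invented; this is a sketch of the idea, not a canonical definition.

```python
import secrets

def ind_cpa_experiment(scheme, Adv):
    """One run of a left-or-right IND-CPA game; returns 1 if the adversary wins."""
    key = scheme.keygen()
    b = secrets.randbits(1)               # the hidden challenge bit
    adv = Adv()
    while True:
        query = adv.next_query()          # adversary submits pairs (m0, m1) of equal length
        if query is None:
            break
        m0, m1 = query
        assert len(m0) == len(m1)
        adv.receive(scheme.encrypt(key, m1 if b else m0))   # always encrypt m_b
    return 1 if adv.guess() == b else 0
```

The scheme is IND-CPA secure if no efficient adversary makes this experiment output 1 with probability non-negligibly better than 1/2. The point is that this single, fixed experiment plays the role of the environment, whereas a SIM-style statement quantifies over all environments.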