Discussion:
[ietf-dkim] Introducing myself
Charles Lindsey
2006-10-30 21:42:21 UTC
Firstly, let me apologise for not appearing on this list earlier, but I
only became aware of this project a little over a week ago, and I have
been studying the documents carefully since then, as time permitted.

I am familiar with the two existing schemes for signing headers of
messages, namely
PGPVERIFY, for authenticating control messages in Netnews
PGPMOOSE, for authenticating articles posted to moderated newsgroups
and I have experience of both sending and acting upon PGPVERIFY messages,
and of hacking code to process them.

Moreover, at a time when the ietf-usefor WG was considering a replacement
for PGPVERIFY (which has some technical problems, and is not in a fit
state for standardization as it stands) I wrote a draft for a complete
header signing scheme, although the Usefor WG decided at that time not to
proceed with it as it was having trouble enough dealing with more pressing
issues. It is, in principle, still on the list of future work for that WG.

My draft has long since expired as an ID, but it may still be seen at
http://www.imc.org/ietf-usefor/drafts/draft-lindsey-usefor-signed-01.txt
and it may be of interest to members of this list. It has many similarities
with the DKIM-base, but also many differences, in particular a somewhat
more aggressive canonicalization.

At that time, I tried to interest the ietf-822 mailing list in it, but the
Grandees on that group informed me, in no uncertain terms, that signing of
email headers was a totally unnecessary concept that would never be of any
practical use :-( . Nevertheless, I still took care to ensure that my
draft was workable both for Email and Netnews.

On studying the DKIM-base document, I find many features that are
excellent, a few that are perplexing, and a couple that I consider
downright harmful. But as a newcomer to this list, and particularly as,
AIUI, you are trying to get this proposal finalized as soon as possible, it
would be inappropriate for me to barge in with a long series of problems
and counter-proposals.

So I will, instead, stick to asking questions which I hope some member of
this list will be kind enough to answer, and then some of my perplexities
will hopefully be reduced.

Note that these comments are in terms of the dkim-base-05 draft. I have
looked briefly at draft-06, particularly at its list of changes from
draft-05, but it seems most of what I wanted to say still applies.
3.1 Selectors
Is a <selector> case-insensitive as domain-names are? And is it to be
rendered in IDNA if a Non-ASCII charset is involved?
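(For illustration only, and assuming IDNA did apply, the selector would
presumably travel in its ASCII-compatible encoding, e.g. in Python:

    # Illustration only: dkim-base does not actually say IDNA applies
    # to selectors.
    'bücher'.encode('idna')     # -> b'xn--bcher-kva'

but the draft does not say so either way.)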
3.2 Tag=Value Lists
INFORMATIVE IMPLEMENTATION NOTE: Although the "plain text"
defined below (as "tag-value") only includes 7-bit characters, an
implementation that wished to anticipate future standards would be
advised to not preclude the use of UTF8-encoded text in tag=value
lists.
Those future standards are nearer than you think. The currently active
ietf-eai WG is charged with producing an experimental protocol for writing
headers in UTF-8. Would it not be wiser to make support for arbitrary
octets (except those essential for parsing such as ";", CR, LF, etc) a
MUST accept right from the start?

As a matter of interest, why don't <tag>s use the same syntax as <token>s,
which appear in similar contexts in RFC 2045 and other places (but without
any hint of CFWS around them, of course). And are <tag-name>s case
insensitive?
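For concreteness, parsing such a list amounts to something like this
sketch (Python; it ignores the finer points of the ABNF, such as the
exact tag-name syntax and the handling of duplicate tags):

    def parse_tag_list(s):
        # Split "tag=value; tag=value" pairs; the grammar allows FWS
        # around tags and values, so strip it off.
        tags = {}
        for field in s.split(';'):
            if not field.strip():
                continue                    # tolerate a trailing ";"
            name, _, value = field.partition('=')
            tags[name.strip()] = value.strip()
        return tags

    parse_tag_list("v=1; a=rsa-sha256; d=example.net; s=brisbane")
    # -> {'v': '1', 'a': 'rsa-sha256', 'd': 'example.net', 's': 'brisbane'}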
3.3.3 Other algorithms
Presumably there is nothing to prevent allowing PGP as the signing
algorithm in the future, if someone makes out a good case for it.
3.4 Canonicalization
In what circumstances is the 'simple' canonicalization inappropriate, and
why is it the default?

Is it not the case that the "meaning" of a message is, according to RFC
2822 etc., unaffected by changes of folding, or of case of header-names,
or of CTE, or of encodings or re-encodings using RFC 2047 or RFC 2231? And
hence any canonicalization that preserves "meaning" cannot do any harm?
Anyway, I shall return to this when I come to section 5.3.
3.4.2 The "relaxed" Header Field Canonicalization Algorithm
Is it possible that re-folding a structured header en route will introduce
a WSP that was not there beforehand, and thus break the signature? I
avoided this in my own draft by ignoring all WSP in headers, except when
inside comments and quoted-strings and in structured headers such as
Subject.
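For reference, my reading of the "relaxed" algorithm amounts to the
sketch below (Python). Note that a re-folder which inserts WSP between
two previously adjacent tokens would still defeat it, which is exactly
the case I am worried about:

    import re

    def relaxed_header(name, value):
        # dkim-base 3.4.2: unfold the value, collapse runs of WSP to a
        # single SP, trim WSP at the ends, and lowercase the field name.
        value = re.sub(r'\r\n([ \t])', r'\1', value)   # unfold
        value = re.sub(r'[ \t]+', ' ', value).strip()
        return name.lower() + ':' + value + '\r\n'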
3.5 The DKIM-Signature header field
Although your charter forbids you from discussing non-repudiation,
authorization, and other matters not strictly relevant for DKIM, it is to
be envisaged that other applications will arise from time to time
requiring signatures over headers, and it would be unfortunate if each
such application had to invent Yet-Another-Signing-Protocol when a simple
adaptation of what you have written would have sufficed. There are already
too many only-slightly-different-wheels in existence for us to be
inventing any more. Surely, a facility for signing headers should be
described as a tool which can then be used for various applications in
future, of which DKIM would be just the first? So why was this approach
not taken?

In fact, you almost made it. The only features which might make it hard
for future applications that I can see are the appearance of "DKIM" in
your newly invented "DKIM-Signature" header (it rather needs an
'application' tag in the signature to indicate why the signature was
made), and the insistence that the d= and s= tags, which together identify
the owner of the key, should be syntactically of the form of domain-names
(which might be totally inappropriate for those other applications, though
it should clearly be required when the application is DKIM).

Can the various tags appear in this header in any order? OTOH, why is
there not an insistence that the b= tag should come last (since it has to
be easily joined to and separated from the rest)?
v= Version (MUST be included).
Does the version relate to the version of the algorithm identified by the
a= tag, or to the version of dkim-base as a whole? IOW, if someone invents
a new tag, or a new tag-value, that can be safely ignored by existing
implementations, is it necessary to invent a new version?
bh= The hash of the canonicalized body part of the message
Yes, I like this, since it enables some useful information to be recovered
if the header hash succeeds but the body hash fails.
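For what it is worth, computing bh= with the "simple" body
canonicalization comes down to something like this sketch (Python,
sha-256 assumed):

    import base64, hashlib

    def body_hash_simple(body):
        # "simple" body canonicalization: ignore all empty lines at the
        # end of the body, then make sure it ends with exactly one CRLF.
        while body.endswith(b'\r\n\r\n'):
            body = body[:-2]
        if not body.endswith(b'\r\n'):
            body += b'\r\n'
        return base64.b64encode(hashlib.sha256(body).digest())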
d= The domain of the signing entity
Is this case-insensitive?
h= Signed header fields
Why MUST NOT this list be empty? Suppose you want to sign the body, but
not any headers? Unusual, but perhaps sensible for some application. No
interoperability problem arises.

I don't understand the remark about "message/rfc822 content types". How
can this problem arise?
i= Identity of the user or agent
Is this case-insensitive (I might expect a different answer there for the
<local-part> and the <domain-name>)?

Why MUST the <domain-name> be a subdomain of the d= tag (and why not of
the s= tag, and what interoperability problem arises anyway)?

Must this tag, if a <local-part> is present, be a valid working email
address?
l= Body length count
I am very suspicious of the propriety of suggesting, in any IETF standard,
that it is legitimate to remove text from a message being conveyed
(certainly without the consent of the recipient). Surely marking it with
blood-red ink, or warnings in 32pt characters is as far as one should go?
q= A colon-separated list of query methods used to retrieve the
public key
Clearly, the use of DNS or some similar global database is the only
sensible PKI that is workable for DKIM. But am I right in saying that this
tag does not preclude the use of other PKIs for other applications (e.g.
attached certificates, web-of-trust, private agreements between the
communicating parties, etc.)?

Why MUST signers support "dns/ext" (clearly, verifiers MUST)? Surely a
signer who, as a matter of policy, always chooses to use some other query
method, is not obliged to implement something he is never going to use.
s= The Selector subdividing the namespace
Case-insensitive?
t= Signature Timestamp
... The format is the number of seconds
since 00:00:00 on January 1, 1970 in the UTC time zone. ...
Strictly speaking not true, since the usual UNIX algorithm for calculating
this quantity takes no account of leap seconds. I presume this is all laid
down in POSIX somewhere.

And expecting this to work up to AD 200,000 seems overkill (though
beyond 2038 would be helpful).
z= Copied header fields
Verifiers MUST NOT use the header field names or copied values
for checking the signature in any way. Copied header field
values are for diagnostic use only.
Why ever not? I can think of examples where a verifier might find it
exceedingly useful to be aware of the original state of some header which
might have been changed somewhere en route. And what potential
interoperability problem arises if a verifier makes some use of this information?
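(For reference: a z= value is just a "|"-separated list of header
copies, with the values in a quoted-printable variant, so a policy
module could recover the originals with something like this sketch,
ordinary Q-P decoding standing in for the DKIM variant:)

    import quopri

    def parse_z_tag(z):
        # Split the "|"-separated "name:value" copies and undo the =XX
        # escapes; diagnostic use only, per the draft.
        copies = {}
        for item in z.split('|'):
            name, _, value = item.partition(':')
            copies[name.strip()] = quopri.decodestring(
                value.encode()).decode('utf-8', 'replace')
        return copies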
3.6 Key Management and Representation
public_key = dkim_find_key(q_val, d_val, s_val)
I do not find the operation 'dkim_find_key' defined or used anywhere else
in the draft.
3.6.1 Textual Representation
h= Acceptable hash algorithms
... Signers and Verifiers MUST
support the "sha256" hash algorithm. Verifiers MUST also support
the "sha1" hash algorithm.
Why MUST signers support the "sha256" hash algorithm (clearly, verifiers
MUST)? Surely a signer who, as a matter of policy, always chooses to use
sha-1 is not obliged to implement something he is never going to use?
k= Key type (plain-text; OPTIONAL, default is "rsa"). Signers and
verifiers MUST support the "rsa" key type.
Why MUST signers support the "rsa" key type (clearly, verifiers MUST)?
Surely a signer who, as a matter of policy, always chooses to use some
other key type is not obliged to implement something he is never going to
use?
3.7 Computing the Message Hashes
.... The header field MUST be presented to
the hash algorithm after the body of the message ...
Does "The header field" mean 'The "DKIM-Signature" header field under
construction'?
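If so, my understanding is that the header hash input would be
assembled along these lines (a sketch under that reading):

    def header_hash_input(signed_headers, dkim_sig_field):
        # signed_headers: the canonicalized "name:value CRLF" strings,
        # in the order listed in h=.
        # dkim_sig_field: the canonicalized DKIM-Signature field under
        # construction (or being verified), with the b= value emptied;
        # note that it carries no trailing CRLF.
        return ''.join(signed_headers) + dkim_sig_field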
4. Semantics of Multiple Signatures
Signers should be cognizant that signing DKIM-Signature headers may
result in signature failures with intermediaries that do not
recognize that DKIM-Signatures are trace headers and unwittingly
reorder them.
This method of relying on the order of headers to distinguish between
multiple signatures seems far from robust. I would be happy to describe an
alternative and more reliable method, applicable at least to signing other
signatures, that I have in mind (but I promised to stick to questions for
now :-) ).
... For
example, a verifier that by policy chooses not to accept signatures
with deprecated cryptographic algorithms should consider such
signatures invalid. As with messages with a single signature,
verifiers are at liberty to use the presence of valid signatures as
an input to local policy; ...
Where are "valid" and "invalid" defined, and is "invalid" synonymous with
"failed"? I would hope not, but it is not clear.
5.1 Determine if the Email Should be Signed and by Whom
INFORMATIVE IMPLEMENTER ADVICE: SUBMISSION servers should not
sign Received header fields if the outgoing gateway MTA obfuscates
Received header fields, for example to hide the details of
internal topology.
I see several mentions of signing Received headers. Signing of any header
that may occur multiple times in a message is always risky (though I can
see a necessity for it in a few cases). Under what circumstances would
including a Received header within a signature provide a security benefit
(in the sense of countering some scam or threat) commensurate with this
risk?
5.3 Normalize the Message to Prevent Transport Conversions
I found this section absolutely astounding.

Message bodies written in Non-ASCII charsets have been commonplace now for
12 or more years, and they are most readily represented as 8-bit. 8BITMIME
has been around for the same length of time and is now almost universally
deployed. 8bit using 8BITMIME has become, or is well on the way to
becoming, the preferred CTE for charsets which will not fit into 7bits.
And yet you are now seriously proposing, for a protocol that needs to be
used in the great majority of future email messages if it is to fulfill
its purpose, to return to encodings that can be squashed into 7bits. That
is one monumental step backwards for the IETF.

Moreover, even as we speak, the ietf-eai WG, which is chartered to bring
in headers using UTF-8 and which can then nearly always be read and
understood by examination of the code as seen on the wire, is advocating
the universal use of 8BITMIME (and more) except when interfacing with
legacy systems which will, hopefully, have faded away by the time the next
12 years or so have elapsed.

And all this is entirely unnecessary. As I have already said, the
"meaning" of any email message is, by definition, independent of the CTE
with which it is transported. All you have to do is to arrange that the
canonicalization decodes any Quoted-Printable or Base64 that is
encountered, and uses the result of that in computing any hash. Was this
option considered and, if so, why was it rejected?
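To make the suggestion concrete, here is a sketch of the decoding step
for a single-part body (Python; hypothetical, this is not what
dkim-base specifies):

    import base64, quopri

    def decode_cte(body, cte):
        # Hash the *decoded* body, so that en-route re-encoding between
        # 8bit, quoted-printable and base64 cannot break the signature.
        cte = cte.strip().lower()
        if cte == 'base64':
            return base64.b64decode(body)
        if cte == 'quoted-printable':
            return quopri.decodestring(body)
        return body     # 7bit, 8bit, binary: use as-is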
5.4 Determine the header fields to Sign
The
Eliot Lear
2006-10-31 06:27:14 UTC
Hi Charles, and welcome. I can't answer many of your questions, but I
can certainly take a whack at a few.
Post by Charles Lindsey
3.3.3 Other algorithms
Presumably there is nothing to prevent allowing PGP as the signing
algorithm in the future, if someone makes out a good case for it.
PGP, to the best of my knowledge, is neither a hash nor a cryptographic
algorithm, but makes use of both. We should anticipate new algorithms
for each in the future, but this is not something we want to have a
plethora of, lest we fragment the installed base.
Post by Charles Lindsey
3.4 Canonicalization
In what circumstances is the 'simple' canonicalization inappropriate, and
why is it the default?
Is it not the case that the "meaning" of a message is, according to RFC
2822 etc., unaffected by changes of folding, or of case of header-names,
or of CTE, or of encodings or re-encodings using RFC 2047 or RFC 2231? And
hence any canonicalization that preserves "meaning" cannot do any harm?
Anyway, I shall return to this when I come to section 5.3.
3.4.2 The "relaxed" Header Field Canonicalization Algorithm
Is it possible that re-folding a structured header en route will introduce
a WSP that was not there beforehand, and thus break the signature? I
avoided this in my own draft by ignoring all WSP in headers, except when
inside comments and quoted-strings and in structured headers such as
Subject.
3.5 The DKIM-Signature header field
Although your charter forbids you from discussing non-repudiation,
authorization, and other matters not strictly relevant for DKIM, it is to
be envisaged that other applications will arise from time to time
requiring signatures over headers, and it would be unfortunate if each
such application had to invent Yet-Another-Signing-Protocol when a simple
adaptation of what you have written would have sufficed. There are already
too many only-slightly-different-wheels in existence for us to be
inventing any more. Surely, a facility for signing headers should be
described as a tool which can then be used for various applications in
future, of which DKIM would be just the first? So why was this approach
not taken?
In fact it is. The DNS record contains s= for this purpose, and the
purpose of the signature contained within the message can be inferred by
the application invoking it ;-)
Post by Charles Lindsey
In fact, you almost made it. The only features which might make it hard
for future applications that I can see are the appearance of "DKIM" in
your newly invented "DKIM-Signature" header (it rather needs an
'application' tag in the signature to indicate why the signature was
made), and the insistence that the d= and s= tags, which together identify
the owner of the key, should be syntactically of the form of domain-names
(which might be totally inappropriate for those other applications, though
it should clearly be required when the application is DKIM).
Can the various tags appear in this header in any order? OTOH, why is
there not an insistence that the b= tag should come last (since it has to
be easily joined to and separated from the rest)?
I can't speak to anyone's implementation, but general SMTP folding rules
apply, no?
Post by Charles Lindsey
v= Version (MUST be included).
Does the version relate to the version of the algorithm identified by the
a= tag, or to the version of dkim-base as a whole? IOW, if someone invents
a new tag, or a new tag-value, that can be safely ignored by existing
implementations, is it necessary to invent a new version?
Version of DKIM, but I think this is a valid point.
Post by Charles Lindsey
bh= The hash of the canonicalized body part of the message
Yes, I like this, since it enables some useful information to be recovered
if the header hash succeeds but the body hash fails.
d= The domain of the signing entity
Is this case-insensitive?
h= Signed header fields
Why MUST NOT this list be empty? Suppose you want to sign the body, but
not any headers? Unusual, but perhaps sensible for some application. No
interoperability problem arises.
I don't understand the remark about "message/rfc822 content types". How
can this problem arise?
i= Identity of the user or agent
Is this case-insensitive (I might expect a different answer there for the
<local-part> and the <domain-name>)?
Why MUST the <domain-name> be a subdomain of the d= tag (and why not of
the s= tag, and what interoperability problem arises anyway)?
Must this tag, if a <local-part> is present, be a valid working email
address?
l= Body length count
I am very suspicious of the propriety of suggesting, in any IETF standard,
that it is legitimate to remove text from a message being conveyed
(certainly without the consent of the recipient). Surely marking it with
blood-red ink, or warnings in 32pt characters is as far as one should go?
What if it's executable code inserted by an attacker?
Post by Charles Lindsey
q= A colon-separated list of query methods used to retrieve the
public key
Clearly, the use of DNS or some similar global database is the only
sensible PKI that is workable for DKIM. But am I right in saying that this
tag does not preclude the use of other PKIs for other applications (e.g.
attached certificates, web-of-trust, private agreements between the
communicating parties, etc.)?
Why MUST signers support "dns/ext" (clearly, verifiers MUST)? Surely a
signer who, as a matter of policy, always chooses to use some other query
method, is not obliged to implement something he is never going to use.
You can do what you want between two parties with pre-arrangement, but
that's not the interesting case DKIM is attempting to solve. In fact, a
primary purpose of DKIM is to enable trusted introducers. And so a
single common method is needed for interoperability.
Post by Charles Lindsey
s= The Selector subdividing the namespace
Case-insensitive?
t= Signature Timestamp
... The format is the number of seconds
since 00:00:00 on January 1, 1970 in the UTC time zone. ...
Strictly speaking not true, since the usual UNIX algorithm for calculating
this quantity takes no account of leap seconds. I presume this is all laid
down in POSIX somewhere.
And expecting this to work up to AD 200,000 seems an overkill (though
beyond 2038 would be helpful).
I believe for purposes of this discussion this is not a big deal.
Post by Charles Lindsey
z= Copied header fields
Verifiers MUST NOT use the header field names or copied values
for checking the signature in any way. Copied header field
values are for diagnostic use only.
Why ever not? I can think of examples where a verifier might find it
exceedingly useful to be aware of the original state of some header which
might have been changed somewhere en route. And what potential
interoperability problem arises if a verifier makes some use of this information?
Because the z= field does NOT get rendered to the user; it is the real
header fields that will be. It makes no sense for the z= field to be
verified when the others are not. Allowing otherwise would open up a
malicious opportunity.
Post by Charles Lindsey
3.6 Key Management and Representation
public_key = dkim_find_key(q_val, d_val, s_val)
I do not find the operation 'dkim_find_key' defined or used anywhere else
in the draft.
3.6.1 Textual Representation
h= Acceptable hash algorithms
... Signers and Verifiers MUST
support the "sha256" hash algorithm. Verifiers MUST also support
the "sha1" hash algorithm.
Why MUST signers support the "sha256" hash algorithm (clearly, verifiers
MUST)? Surely a signer who, as a matter of policy, always chooses to use
sha-1 is not obliged to implement something he is never going to use?
Again, for interoperability. We picked one. That was the one.

I'm sorry, but I've run out of time and must be on a plane shortly.
Perhaps others will pick up the ones I did not answer.

Eliot
Charles Lindsey
2006-11-01 15:04:22 UTC
Post by Eliot Lear
Hi Charles, and welcome. I can't answer many of your questions, but I
can certainly take a whack at a few.
Post by Charles Lindsey
3.5 The DKIM-Signature header field
Can the various tags appear in this header in any order? OTOH, why is
there not an insistence that the b= tag should come last (since it has to
be easily joined to and separated from the rest)?
I can't speak to anyone's implementation, but general SMTP folding rules
apply, no?
Yes, sensible folding would help. But it would make life easier for
implementors if the b= tag was always last. That is, in any case, where
most implementors are likely to put it simply because that is easiest, and
it is also the easiest place to ignore it when a verifier needs to grab
everything in the header _except_ that bit when constructing its hash.
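Either way, the verifier has to blank out the b= value before hashing
the header, along these lines (a sketch only; a real implementation
would parse the tag list properly rather than pattern-match):

    import re

    def strip_b_value(sig_value):
        # Empty the b= value; "bh=" is not matched because the pattern
        # requires b= to start a tag. If b= were guaranteed to come
        # last, simple truncation after "b=" would do instead.
        return re.sub(r'(^|[;\s])b=[^;]*', r'\1b=', sig_value)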
Post by Eliot Lear
Post by Charles Lindsey
l= Body length count
I am very suspicious of the propriety of suggesting, in any IETF standard,
that it is legitimate to remove text from a message being conveyed
(certainly without the consent of the recipient). Surely marking it with
blood-red ink, or warnings in 32pt characters is as far as one should go?
What if it's executable code inserted by an attacker?
A fair point, but labelling it clearly should surely be enough. One
application of this tag is apparently for boilerplate added by a mailing
list expander, and if that boilerplate is there, it is presumably intended
that people should read it, and the list admin is entitled to be miffed if
the recipients do not even get to see it.
Post by Eliot Lear
Post by Charles Lindsey
z= Copied header fields
Verifiers MUST NOT use the header field names or copied values
for checking the signature in any way. Copied header field
values are for diagnostic use only.
Why ever not? I can think of examples where a verifier might find it
exceedingly useful to be aware of the original state of some header which
might have been changed somewhere en route. And what potential
interoperability problem arises if a verifier makes some use of this information?
Because the z= field does NOT get rendered to the user; it is the real
header fields that will be. It makes no sense for the z= field to be
verified when the others are not. Allowing otherwise would open up a
malicious opportunity.
Yes, but it is the policy module of the verifier, rather than the user,
which might find it useful. Some headers may legitimately get changed en
route. These should probably not be signed, but knowing what such a header
originally looked like might enable the policy module to spot some unusual
situation and work around it. The example I have in mind is the use of
UTF-8 in headers, being worked on by the ietf-EAI WG, where a message with
'internationalized' headers will have a special header
Header-Type: UTF8
If that message has to be downgraded en route, because it meets a legacy
MTA that does not support UTF8, then that header gets changed to
Header-Type: Downgraded
If a verifier can detect the change in that header, it can try to
'upgrade' the message to its original form before checking the signature.
I am not saying that is the only way that UTF8 headers might be made to
work with DKIM, but it is certainly one possible approach that should be
looked at.
Post by Eliot Lear
Post by Charles Lindsey
3.6.1 Textual Representation
h= Acceptable hash algorithms
... Signers and Verifiers MUST
support the "sha256" hash algorithm. Verifiers MUST also support
the "sha1" hash algorithm.
Why MUST signers support the "sha256" hash algorithm (clearly, verifiers
MUST)? Surely a signer who, as a matter of policy, always chooses to use
sha-1 is not obliged to implement something he is never going to use?
Again, for interoperability. We picked one. That was the one.
I think you misunderstand my point. AIUI, a signer is allowed to choose
either sha-1 or sha-256. So evidently all verifiers MUST be able to accept
whichever of those turns up.

But if a signer decides "my policy is always to use sha-1" then what is
the point of saying he MUST include code in his implementation which he is
never going to use. RFC 2119 says you can only say "MUST" where there is
the possibility of some interoperability problem or other harm. But in
this case there would be no way for an outsider with no access to the
signer's machine to be aware that he had omitted that code. Hence the use
of "MUST" for the signer is meaningless, since violating it produces no
visible effects.

The same argument applies to the places where it says the signer MUST
support dns/ext and rsa. In those cases too, it is only the verifier who
MUST support everything the signer is allowed to send.
Post by Eliot Lear
I'm sorry, but I've run out of time and must be on a plane shortly.
Have a good trip!
--
Charles H. Lindsey ---------At Home, doing my own thing------------------------
Tel: +44 161 436 6131     Web: http://www.cs.man.ac.uk/~chl
Email: ***@clerew.man.ac.uk      Snail: 5 Clerewood Ave, CHEADLE, SK8 3JU, U.K.
PGP: 2C15F1A9      Fingerprint: 73 6D C2 51 93 A0 01 E7 65 E8 64 7E 14 A4 AB A5
Jon Callas
2006-10-31 09:26:57 UTC
On 30 Oct 2006, at 1:42 PM, Charles Lindsey wrote:

I'm going to cherry-pick a few things, particularly the ones I think
I'm best suited to answer.
Post by Charles Lindsey
3.2 Tag=Value Lists
INFORMATIVE IMPLEMENTATION NOTE: Although the "plain text"
defined below (as "tag-value") only includes 7-bit characters, an
implementation that wished to anticipate future standards would be
advised to not preclude the use of UTF8-encoded text in tag=value
lists.
Those future standards are nearer than you think. The currently active
ietf-eai WG is charged with producing an experimental protocol for writing
headers in UTF-8. Would it not be wiser to make support for arbitrary
octets (except those essential for parsing such as ";", CR, LF, etc) a
MUST accept right from the start?
No, because we want DKIM to work with 7-bit-clean mail. We want
broad, fast deployment and that means working with old systems.
Post by Charles Lindsey
3.3.3 Other algorithms
Presumably there is nothing to prevent allowing PGP as the signing
algorithm in the future, if someone makes out a good case for it.
As Eliot has noted, PGP isn't a signing algorithm. PGP is a signing
protocol. Actually, to be even more correct, OpenPGP is a signing
protocol. PGP is software.

Nonetheless, DKIM is specifically designed to be orthogonal to
OpenPGP, S/MIME, or anything else. If you want to sign the content of
a message, they're appropriate for it.
Post by Charles Lindsey
3.5 The DKIM-Signature header field
Although your charter forbids you from discussing non-repudiation,
authorization, and other matters not strictly relevant for DKIM, it is to
be envisaged that other applications will arise from time to time
requiring signatures over headers, and it would be unfortunate if each
such application had to invent Yet-Another-Signing-Protocol when a simple
adaptation of what you have written would have sufficed. There are already
too many only-slightly-different-wheels in existence for us to be
inventing any more. Surely, a facility for signing headers should be
described as a tool which can then be used for various applications in
future, of which DKIM would be just the first? So why was this approach
not taken?
In fact, you almost made it. The only features which might make it hard
for future applications that I can see are the appearance of "DKIM" in
your newly invented "DKIM-Signature" header (it rather needs an
'application' tag in the signature to indicate why the signature was
made), and the insistence that the d= and s= tags, which together identify
the owner of the key, should be syntactically of the form of domain-names
(which might be totally inappropriate for those other applications, though
it should clearly be required when the application is DKIM).
Can the various tags appear in this header in any order? OTOH, why is
there not an insistence that the b= tag should come last (since it has to
be easily joined to and separated from the rest)?
h= Signed header fields
Why MUST NOT this list be empty? Suppose you want to sign the body, but
not any headers? Unusual, but perhaps sensible for some application. No
interoperability problem arises.
Because you have to sign at least one header. Think of DKIM as a
header-signing system. It signs the body, too, but that's a means to
an end.

At the risk of using a postal metaphor (since email is a surprisingly
different beast from postal mail), DKIM is an integrity protocol for
the envelope, not for the letter. Mechanically, it has to sign the
body, but that, again, is not the goal.
Post by Charles Lindsey
i= Identity of the user or agent
Must this tag, if a <local-part> is present, be a valid working email
address?
No, it can be anything you want. It's really a note from the signing
domain to itself. Here's a scenario. You ring me up and tell me that
one of my users is misbehaving in email. You show me the email. I use
the i= to know whose knuckles to rap. But I may do so in a way that's
completely opaque to you.
Post by Charles Lindsey
l= Body length count
I am very suspicious of the propriety of suggesting, in any IETF standard,
that it is legitimate to remove text from a message being conveyed
(certainly without the consent of the recipient). Surely marking it with
blood-red ink, or warnings in 32pt characters is as far as one should go?
The point of body lengths is that many systems add text to the end of
a message. In fact, the list server that is delivering this to you is
doing precisely that.

If the signing server (callas.org) sends it to the mailing list
(mipassoc.org) which then sends it to you, how do you know what the
mailing list added? That is what the length tells you.

The length is the opposite of what you think it is. It is an explicit
declaration by the responsible domain of what it is responsible for.
If a spammer adds stuff at the end, it should be removed. It is
saying that text that the author did not put there is not the
author's text and thus may be removed. (I'm playing fast and loose a
bit here, because it's not the author, it's the author's exit mail
server.)
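In code terms, l= just limits how much of the canonicalized body the
hash covers; a sketch:

    def body_for_hash(canonical_body, l=None):
        # With l= present, only the first l octets of the canonicalized
        # body are signed; anything appended after that point (a list
        # footer, or a spammer's addition) is outside the signature.
        return canonical_body if l is None else canonical_body[:l]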
Post by Charles Lindsey
q= A colon-separated list of query methods used to retrieve the
public key
Clearly, the use of DNS or some similar global database is the only
sensible PKI that is workable for DKIM. But am I right in saying that this
tag does not preclude the use of other PKIs for other applications (e.g.
attached certificates, web-of-trust, private agreements between the
communicating parties, etc.)?
A couple of comments on this one.

DKIM is not a PKI. It has no trust model. It is a key-centric system.
There's nothing wrong with embodying those keys in certificates or
even gaffer tape, but that's not part of DKIM. However, yes, you are
right, it doesn't preclude using anything else.

Second, PKIs are not distribution mechanisms. DNS is a distribution
mechanism. Other possible mechanisms include LDAP, HTTP, FTP, Finger,
Gopher, etc.
Post by Charles Lindsey
Why MUST signers support "dns/ext" (clearly, verifiers MUST)? Surely a
signer who, as a matter of policy, always chooses to use some other query
method, is not obliged to implement something he is never going to use.
Then they're not doing DKIM. If you're doing DKIM, you MUST do DNS.
Conceivably, you might implement other mechanisms, too, but DNS is
the one distribution mechanism everyone has to do.
Post by Charles Lindsey
t= Signature Timestamp
... The format is the number of seconds
since 00:00:00 on January 1, 1970 in the UTC time zone. ...
Strictly speaking not true, since the usual UNIX algorithm for calculating
this quantity takes no account of leap seconds. I presume this is all laid
down in POSIX somewhere.
And expecting this to work up to AD 200,000 seems an overkill (though
beyond 2038 would be helpful).
This is a definition. It has nothing to do with unix.
Post by Charles Lindsey
3.6.1 Textual Representation
h= Acceptable hash algorithms
... Signers and Verifiers MUST
support the "sha256" hash algorithm. Verifiers MUST also support
the "sha1" hash algorithm.
Why MUST signers support the "sha256" hash algorithm (clearly,
verifiers
MUST)? Surely a signer who, as a matter of policy, always chooses to use
sha-1 is not obliged to implement something he is never going to use?
It's for interoperability.

Here's what's going on there. SHA-1 is broken. It isn't so broken
that we're going to demand that no one use it, but it's broken. We
*want* you to use SHA-256. However, we are making a concession to
people who have some need or desire to use SHA-1 and making sure it
will all still work.
Post by Charles Lindsey
k= Key type (plain-text; OPTIONAL, default is "rsa"). Signers and
verifiers MUST support the "rsa" key type.
Why MUST signers support the "rsa" key type (clearly, verifiers MUST)?
Surely a signer who, as a matter of policy, always chooses to use some
other key type is not obliged to implement something he is never going to
use?
For interoperability. You have to have a key type that *everyone*
implements. Note that in this standard like many others, mandatory-to-
implement does not mean mandatory-to-use. It's perfectly fine for a
user to decide that they're only ever going to use ECDSA, and never
RSA. But the software that they use has to implement RSA because
other people are.
Post by Charles Lindsey
5.3 Normalize the Message to Prevent Transport Conversions
I found this section absolutely astounding.
Message bodies written in Non-ASCII charsets have been commonplace now for
12 or more years, and they are most readily represented as 8-bit. 8BITMIME
has been around for the same length of time and is now almost universally
deployed. 8bit using 8BITMIME has become, or is well on the way to
becoming, the preferred CTE for charsets which will not fit into 7bits.
And yet you are now seriously proposing, for a protocol that needs to be
used in the great majority of future email messages if it is to fulfill
its purpose, to return to encodings that can be squashed into 7bits. That
is one monumental step backwards for the IETF.
You're confusing making sure that 7-bit systems don't break with
encouraging them. We're not encouraging them. But we're being
realistic and want DKIM to work with existing systems.
Post by Charles Lindsey
Sorry to have gone on at such length. But some of these issues seem of
importance, so I hope someone will take the trouble to reply.
It's all right. Sorry for being a bit abrupt, but you need to catch
up to what we're doing.

Go look at the web site at www.dkim.org, which will have a lot of
things in it to help you get up to speed. Look at the DKIM WG charter
at <http://www.ietf.org/html.charters/dkim-charter.html> which has
pointers to the WG documents.

You especially should look at the DKIM threats RFC, RFC 4686. Also
after that, look at the DKIM Service Overview. Those should help you
understand a lot of the mechanism that's in DKIM-base.

Jon
Charles Lindsey
2006-11-01 16:03:55 UTC
I'm going to cherry-pick a few things, particularly the ones I think I'm
best suited to answer.
Post by Charles Lindsey
3.2 Tag=Value Lists
INFORMATIVE IMPLEMENTATION NOTE: Although the "plain text"
defined below (as "tag-value") only includes 7-bit characters, an
implementation that wished to anticipate future standards would be
advised to not preclude the use of UTF8-encoded text in tag=value
lists.
Those future standards are nearer than you think. The currently active
ietf-eai WG is charged with producing an experimental protocol for writing
headers in UTF-8. ...
No, because we want DKIM to work with 7-bit-clean mail. We want broad,
fast deployment and that means working with old systems.
Sure, but that is no reason for making it _not_ work with new systems. If
it is REQUIRED (rather than merely suggested, as in the present draft)
that UTF8-encoded text in tag=value lists is allowed, all existing mail
will still work, but so will the new. That should make everybody happy,
including the implementors who now have one test less to write.
Post by Charles Lindsey
3.5 The DKIM-Signature header field
h= Signed header fields
Why MUST NOT this list be empty? Suppose you want to sign the body, but
not any headers? Unusual, but perhaps sensible for some application. No
interoperability problem arises.
Because you have to sign at least one header. Think of DKIM as a
header-signing system. It signs the body, too, but that's a means to an
end.
Fair enough, but that sounds like grounds for SHOULD NOT rather than MUST
NOT, since nothing actually breaks if you violate it.
Post by Charles Lindsey
i= Identity of the user or agent
Must this tag, if a <local-part> is present, be a valid working email
address?
No, it can be anything you want.
OK.
Post by Charles Lindsey
l= Body length count
I am very suspicious of the propriety of suggesting, in any IETF standard,
that it is legitimate to remove text from a message being conveyed ...
The point of body lengths is that many systems add text to the end of a
message. In fact, the list server that is delivering this to you is
doing precisely that.
So I see, and a very sternly worded remark ("NOTE WELL: ...") it is. So
why do you want to even suggest that it would be legitimate for verifiers
(or their policy modules) to chop it out?
Post by Charles Lindsey
q= A colon-separated list of query methods used to retrieve the
public key
Clearly, the use of DNS or some similar global database is the only
sensible PKI that is workable for DKIM. But am I right in saying that this
tag does not preclude the use of other PKIs for other applications (e.g.
attached certificates, web-of-trust, private agreements between the
communicating parties, etc.)?
A couple of comments on this one.
DKIM is not a PKI. It has no trust model.
I would not say it has no trust model. The DNS is not all that easy to
spoof, so it still gives a reasonably strong assurance that you have the
correct key, though clearly other PKIs are much stronger.
Second, PKIs are not distribution mechanisms.
A matter of definition, I think. I would regard a PKI as encompassing
either or both of a distribution and a trust mechanism.
Post by Charles Lindsey
Why MUST signers support "dns/ext" (clearly, verifiers MUST)? ...
See my reply to Eliot for this, and the similar wordings for hash and
encryption algorithms. I think you have misunderstood the point of my
concern. If it had said that signers SHOULD use dns/ext, ... rsa, ...
sha-256, etc, (at least when the application is DKIM) then that would have
been OK.
Post by Charles Lindsey
5.3 Normalize the Message to Prevent Transport Conversions
I found this section absolutely astounding.
Message bodies written in Non-ASCII charsets have been commonplace now for
12 or more years, and they are most readily represented as 8-bit. 8BITMIME
has been around for the same length of time and is now almost universally
deployed. 8bit using 8BITMIME has become, or is well on the way to
becoming, the preferred CTE for charsets which will not fit into 7bits.
And yet you are now seriously proposing, for a protocol that needs to be
used in the great majority of future email messages if it is to fulfill
its purpose, to return to encodings that can be squashed into 7bits. That
is one monumental step backwards for the IETF.
You're confusing making sure that 7-bit systems don't break with
encouraging them. We're not encouraging them.
You say "signers SHOULD convert the message to a suitable MIME content
transfer encoding such as quoted-printable or base64". That sounds to me
like a pretty strong discouragement to continue using CTE 8bit.
... But we're being realistic and want DKIM to work with existing
systems.
Then in that case it has to work with CTE 8bit even when it gets changed
to Q-P or Base64 by some non-8BITMIME MTA en route (I doubt there are many
systems left that still do that, but there will be some).

And it is not as if this was a difficult problem to solve. All you have to
do is to decode any Q-P or Base64 encountered during the canonicalization
process and use that in the hash. By definition, changing the CTE does not
alter the "meaning" of the message, so no harm can arise from that. Why
was this not done?
Post by Charles Lindsey
Sorry to have gone on at such length. But some of these issues seem of
importance, so I hope someone will take the trouble to reply.
It's all right. Sorry for being a bit abrupt, but you need to catch up
to what we're doing.
Go look at the web site at www.dkim.org, .......
Yes, I have already looked at all the sources you mention.
--
Charles H. Lindsey ---------At Home, doing my own thing------------------------
Tel: +44 161 436 6131     Web: http://www.cs.man.ac.uk/~chl
Email: ***@clerew.man.ac.uk      Snail: 5 Clerewood Ave, CHEADLE, SK8 3JU, U.K.
PGP: 2C15F1A9      Fingerprint: 73 6D C2 51 93 A0 01 E7 65 E8 64 7E 14 A4 AB A5
John Levine
2006-11-04 02:03:38 UTC
Post by Charles Lindsey
writing headers in UTF-8. ...
No, because we want DKIM to work with 7-bit-clean mail. We want broad,
fast deployment and that means working with old systems.
Sure, but that is no reason for making it _not_ work with new systems.
It's still an open question how Unicode is going to show up in mail
headers, with 8 bit UTF8 being only one of multiple possibilities.
More likely there will be some kludge to smoosh it into 7 bits so it
can transit through old MTAs. I don't think anyone is opposed to DKIM
handling whatever happens, but I also don't think it's productive to
try to guess at this point which way it'll turn out.
Post by Charles Lindsey
l= Body length count
So I see, and a very sternly worded remark ("NOTE WELL: ...") it is. So
why do you want to even suggest that it would be legitimate for verifiers
(or their policy modules) to chop it out?
This has been very contentious. Personally, I will never put an l=
into a signature, but there are some vocal people who insist that it's
important for signatures to survive (some) mailing list software, so
it's there if they want it.
Post by Charles Lindsey
See my reply to Eliot for this, and the similar wordings for hash and
encryption algorithms. I think you have misunderstood the point of my
concern. If it had said that signers SHOULD use dns/ext, ... rsa, ...
sha-256, etc, (at least when the application is DKIM) then that would
have been OK.
The MUST is quite deliberate so that DKIM implementations will
interoperate. You're welcome to do whatever you want to exchange
messages with your friends, but for mail to everyone else, you have
to use SHA-256 and dns/ext because that's what you know they'll be
able to handle.
Post by Charles Lindsey
You say "signers SHOULD convert the message to a suitable MIME
content transfer encoding such as quoted-printable or base64". That
sounds to me like a pretty strong discouragement to continue using
CTE 8bit.
I think that what the wording should say is that messages must be valid
RFC2822 (or maybe RFC822) messages. This came up in connection with
what to do with messages with bare CR or LF characters, with the answer
being "don't do that".

You can sign whatever you want, but if the message is 7bit, your
signature is more likely to survive transit to the verifier.
Post by Charles Lindsey
And it is not as if this was a difficult problem to solve. All you have to
do is to decode any Q-P or Base64 encountered during the canonicalization
process ...
DKIM doesn't understand MIME. If DKIM signers and verifiers had to
unpack MIME parts they would be orders of magnitude more complicated.
In practice, I think that nearly everyone uses the simple body canon
anyway.

R's,
John
Tony Finch
2006-11-04 13:38:44 UTC
Post by John Levine
It's still an open question how Unicode is going to show up in mail
headers, with 8 bit UTF8 being only one of multiple possibilities.
More likely there will be some kludge to smoosh it into 7 bits so it
can transit through old MTAs.
It's much less of an open question than you seem to think. Transmitting
unicode over the current 7bit braindamage is handled with RFC 2047 which
has been around for many years, so your last sentence above is wrong in
its use of the future tense. The only proposed change is to allow raw UTF8
when the MTA advertises the capability.
Post by John Levine
I don't think anyone is opposed to DKIM handling whatever happens, but I
also don't think it's productive to try to guess at this point which way
it'll turn out.
It just needs to be 8 bit clean.
Post by John Levine
I think that what the wording should say is that messages must be valid
RFC2822 (or maybe RFC822) messages. This came up in connection with
what to do with messages with bare CR or LF characters, with the answer
being "don't do that".
This implies DKIM cannot be used with 8bitmime or binarymime.

Tony.
--
f.a.n.finch <***@dotat.at> http://dotat.at/
FORTH TYNE DOGGER: WEST 5 TO 7, OCCASIONALLY GALE 8. MODERATE OR ROUGH. FAIR.
GOOD.
John L
2006-11-04 15:03:31 UTC
Post by Tony Finch
Post by John Levine
I think that what the wording should say is that messages must be valid
RFC2822 (or maybe RFC822) messages. This came up in connection with
what to do with messages with bare CR or LF characters, with the answer
being "don't do that".
This implies DKIM cannot be used with 8bitmime or binarymime.
The "don't do that" was in the context of trying to kludge around MTAs
that will change bare CR to something else such as CR LF. 8bitmime has
the same problem, since I gather that some MTAs that will re-code it to
7bit on the fly if they find they're speaking to another MTA that doesn't
support 8bitmime. DKIM works fine if the transmission path doesn't change
the message much, for an extremely ill-defined definition of "much".

I presume we all agree that we do not want to go down the rathole of
trying to survive all the re-coding tricks that MTAs and gateways do.
Perhaps we should add language to the effect that you can sign whatever
you want, and if your transmission path is sufficiently clean, it'll work.

This also suggests that we should ditch the relaxed body encoding, since
the kinds of munging it tolerates are such a tiny fraction of the things
that actually happen to messages.

Regards,
John Levine, ***@iecc.com, Primary Perpetrator of "The Internet for Dummies",
Information Superhighwayman wanna-be, http://johnlevine.com, Mayor
"I dropped the toothpaste", said Tom, crestfallenly.
Paul Hoffman
2006-11-04 15:53:41 UTC
Post by Tony Finch
Post by John Levine
It's still an open question how Unicode is going to show up in mail
headers, with 8 bit UTF8 being only one of multiple possibilities.
More likely there will be some kludge to smoosh it into 7 bits so it
can transit through old MTAs.
It's much less of an open question than you seem to think.
Have you been following the EAI WG? If so, you have a different
interpretation of "open question" than others of us. If not, then you
really should do so before stating how things will be.
Post by Tony Finch
Transmitting
unicode over the current 7bit braindamage is handled with RFC 2047 which
has been around for many years, so your last sentence above is wrong in
its use of the future tense.
How does RFC 2047 handle non-ASCII on the left side of the @? Again,
maybe go read the documents and discussion in EAI. The WG discussion
is particularly useful for folks who are sure that they know the One
True Way to solve the problem.

--Paul Hoffman, Director
--Domain Assurance Council
Dave Crocker
2006-11-05 08:01:38 UTC
Post by Paul Hoffman
Post by Tony Finch
Post by John Levine
It's still an open question how Unicode is going to show up in mail
headers, with 8 bit UTF8 being only one of multiple possibilities.
More likely there will be some kludge to smoosh it into 7 bits so it
can transit through old MTAs.
It's much less of an open question than you seem to think.
Have you been following the EAI WG? If so, you have a different
interpretation of "open question" than others of us. If not, then you
really should do so before stating how things will be.
So it is probably a good thing that he said "more likely". The semantics of
that language is rather different from "how things will be".

As you well remember, the model he described has a solid and legitimate
basis, both in terms of design and in terms of history.

That it is not the direction that EAI is taking is just fine. That the EAI
approach might be the actual choice by the Internet is also fine.

But it is always worth being careful about taking a position concerning a future
that relies on a particular standards effort succeeding, particularly one with a
difficult history.

Maybe it will reach a critical mass of deployment. That would be excellent, of
course.

But there is no guarantee that it will happen.


d/
--
Dave Crocker
Brandenburg InternetWorking
bbiw.net
Charles Lindsey
2006-11-06 10:47:45 UTC
Post by John Levine
It's still an open question how Unicode is going to show up in mail
headers, with 8 bit UTF8 being only one of multiple possibilities.
More likely there will be some kludge to smoosh it into 7 bits so it
can transit through old MTAs. I don't think anyone is opposed to DKIM
handling whatever happens, but I also don't think it's productive to
try to guess at this point which way it'll turn out.
Paul Hoffman has answered this well enough. All that is needed is that
DKIM should not fail to work just because it finds some octet with bit 8
set, because it is clear that whatever happens regarding unicode, such
octets are surely going to appear.

The present draft contains some advice:
INFORMATIVE IMPLEMENTATION NOTE: Although the "plain text"
defined below (as "tag-value") only includes 7-bit characters, an
implementation that wished to anticipate future standards would be
advised to not preclude the use of UTF8-encoded text in tag=value
lists.
Presumably whoever wrote that was satisfied that allowing such
UTF8-encoded text would do no harm. In which case, you may as well make
allowing it mandatory (or at least allowing the full 8bits, since the
question of the actual code doesn't matter at the moment, so long as ASCII
is a subset of it).
Post by John Levine
Post by Charles Lindsey
l= Body length count
This has been very contentious. Personally, I will never put an l=
into a signature, but there are some vocal people who insist that it's
important for signatures to survive (some) mailing list software, so
it's there if they want it.
But the people who want it won't take kindly to having it deleted by
overly "helpful" verifier policy modules. Currently the draft suggests
that this is a reasonable practice. It isn't, and the draft should not be
saying such things. By all means warn the user, and even provide the user
with tools to delete it. But don't chop bits out of his mail without his
approval.
Post by John Levine
The MUST is quite deliberate so that DKIM implementations will
interoperate. You're welcome to do whatever you want to exchange
messages with your friends, but for mail to everyone else, you have
to use SHA-256 and dns/ext because that's what you know they'll be
able to handle.
But that is not what the draft says. It currently says, in effect, that
signers MAY use either SHA-1 or SHA-256 (and in consequence verifiers MUST
accept both - that bit is not in dispute). But you cannot say, at the same
time, that signers MAY use SHA-1 and MUST use SHA-256 (or MUST implement
SHA-256 even if they have no intention of generating it). RFC 2119 just
does not allow you to use MUST in those sorts of ways.
Post by John Levine
Post by Charles Lindsey
You say "signers SHOULD convert the message to a suitable MIME
content transfer encoding such as quoted-printable or base64". That
sounds to me like a pretty strong discouragement to continue using
CTE 8bit.
I think that what the wording should say is that messages must be valid
RFC2822 (or maybe RFC822) messages. ...
Messages have not all been valid RFC 2822 for a long time (ever since
8BITMIME, and now BINARYMIME).
Post by John Levine
You can sign whatever you want, but if the message is 7bit, your
signature is more likely to survive transit to the verifier.
DKIM doesn't understand MIME. If DKIM signers and verifiers had to
unpack MIME parts they would be orders of magnitude more complicated.
In practice, I think that nearly everyone uses the simple body canon
anyway.
Not at all. Going through the MIME structure of a message body and undoing
all Q-P or Base64 encodings is fairly straightforward, and if you hash and
sign the result of doing that, then it is guaranteed to pass straight
through all those systems which (quite legitimately under RFC 1652)
re-encode stuff en route, without breaking the signature. I shall try to
write a demonstration implementation in the next day or so, and it
certainly won't be "orders of magnitude more complicated".

You said in a later message that the "relaxed" body canonicalization
should be ditched because the things it protected against rarely happen.
Surely it would be better to augment it with something that would make it
proof against things that regularly _do_ happen, and happen with the full
blessing of IETF standards.

And, moreover, I do not see why the 'simple' canonicalization is the
default (or even why it even exists at all, for both headers and bodies).
Can anybody suggest a scam or threat that would be facilitated if
"relaxed" rather than "simple" was used?
--
Charles H. Lindsey ---------At Home, doing my own thing------------------------
Tel: +44 161 436 6131     Web: http://www.cs.man.ac.uk/~chl
Email: ***@clerew.man.ac.uk      Snail: 5 Clerewood Ave, CHEADLE, SK8 3JU, U.K.
PGP: 2C15F1A9      Fingerprint: 73 6D C2 51 93 A0 01 E7 65 E8 64 7E 14 A4 AB A5
Charles Lindsey
2006-12-06 13:13:08 UTC
That was quite some time ago, so to refresh your memories, I had been
claiming that DKIM-base would fail to verify if some message had its
Content-Transfer-Encoding changed en route, and that it proposed to get
around this by saying that all messages SHOULD be sent as 7bit, or encoded
into 7bit. In these days when 8BITMIME is now almost universally supported
and widely used (with BINARYMIME coming along as well), that seemed to be
a very backward step. So I proposed a canonicalization that would reverse
all those encodings before hashing.
Post by Charles Lindsey
Post by John Levine
You can sign whatever you want, but if the message is 7bit, your
signature is more likely to survive transit to the verifier.
But of course I don't want them to be "likely to survive". I want a system
that is robust enough that they "always survive".
Post by Charles Lindsey
Post by John Levine
DKIM doesn't understand MIME. If DKIM signers and verifiers had to
unpack MIME parts they would be orders of magnitude more complicated.
In practice, I think that nearly everyone uses the simple body canon
anyway.
Not at all. Going through the MIME structure of a message body and undoing
all Q-P or Base64 encodings is fairly straightforward, and if you hash and
sign the result of doing that, then it is guaranteed to pass straight
through all those systems which (quite legitimately under RFC 1652)
re-encode stuff en route, without breaking the signature. I shall try to
write a demonstration implementation in the next day or so, and it
certainly won't be "orders of magnitude more complicated".
So it was an issue of whether such a canonicalization really would be
"orders of magnitude more complicated". Anyway, I have been working off
and on on this since then, and I have written a demonstration
implementation, as promised, of what it would take, which you can find at
<http://www.cs.man.ac.uk/~chl/uncode/uncode.html>.

It is less than 140 lines of Perl (excluding comments and empty lines).
Hardly any "orders of magnitude" in evidence there.
--
Charles H. Lindsey ---------At Home, doing my own thing------------------------
Tel: +44 161 436 6131 
   Web: http://www.cs.man.ac.uk/~chl
Email: ***@clerew.man.ac.uk      Snail: 5 Clerewood Ave, CHEADLE, SK8 3JU, U.K.
PGP: 2C15F1A9      Fingerprint: 73 6D C2 51 93 A0 01 E7 65 E8 64 7E 14 A4 AB A5
Stephen Farrell
2006-12-06 16:58:41 UTC
Permalink
So it looks to me like you're suggesting a new c14n algorithm for
the WG to consider.

If so, I think the proper course would be to write that up as a
draft-lindsey-dkim-foo and then we can see if we like it or not.
(While having the Perl code is great, it's not necessarily the
easiest thing for everyone to analyse.)

And we do have pluggability in base for c14n, so that if you're
right, and this new c14n produces significantly less brittle
signatures, then your new c14n algorithm would probably get
adopted fairly quickly in any case.

That suggests not trying to replace what we have in base, but
rather proposing your new scheme as an alternate. In the meantime,
we should proceed with getting more deployment experience based
on the current approach.

To me, that seems like a better approach than trying to make such
a significant last minute change, which I would have a problem with
in any case, mainly on the grounds that we'd need a reasonable time
to do a security analysis of any new c14n proposal.

Does that sound like a way forward?

Stephen.
On Mon, 06 Nov 2006 10:47:45 -0000, Charles Lindsey
That was quite some time ago, so to refresh your memories, I had been
claiming that DKIM-base would fail to verify if some message had its
Content-Transfer-Encoding changed en route, and that it proposed to get
around this by saying that all messages SHOULD be sent as 7bit, or
encoded into 7bit. In these days when 8BITMIME is now almost universally
supported and widely used (with BINARYMIME coming along as well), that
seemed to be a very backward step. So I proposed a canonicalization that
would reverse all those encodings before hashing.
Post by Charles Lindsey
Post by John Levine
You can sign whatever you want, but if the message is 7bit, your
signature is more likely to survive transit to the verifier.
But of course I don't want them to be "likely to survive". I want a
system that is robust enough that they "always survive".
Post by Charles Lindsey
Post by John Levine
DKIM doesn't understand MIME. If DKIM signers and verifiers had to
unpack MIME parts they would be orders of magnitude more complicated.
In practice, I think that nearly everyone uses the simple body canon
anyway.
Not at all. Going through the MIME structure of a message body and undoing
all Q-P or Base64 encodings is fairly straightforward, and if you hash and
sign the result of doing that, then it is guaranteed to pass straight
through all those systems which (quite legitimately under RFC 1652)
re-encode stuff en route, without breaking the signature. I shall try to
write a demonstration implementation in the next day or so, and it
certainly won't be "orders of magnitude more complicated".
So it was an issue of whether such a canonicalization really would be
"orders of magnitude more complicated". Anyway, I have been working off
and on on this since then, and I have written a demonstration
implementation, as promised, of what it would take, which you can find
at <http://www.cs.man.ac.uk/~chl/uncode/uncode.html>.
It is less than 140 lines of Perl (excluding comments and empty lines).
Hardly any "orders of magnitude" in evidence there.
--
Charles H. Lindsey ---------At Home, doing my own thing------------------------
Tel: +44 161 436 6131
Web: http://www.cs.man.ac.uk/~chl
PGP: 2C15F1A9 Fingerprint: 73 6D C2 51 93 A0 01 E7 65 E8 64 7E 14 A4 AB A5
_______________________________________________
NOTE WELL: This list operates according to
http://mipassoc.org/dkim/ietf-list-rules.html
Charles Lindsey
2006-12-07 10:07:21 UTC
Permalink
On Wed, 06 Dec 2006 16:58:41 -0000, Stephen Farrell
Post by Stephen Farrell
So it looks to me like you're suggesting a new c14n algorithm for
the WG to consider.
If so, I think the proper course would be to write that up as a
draft-lindsey-dkim-foo and then we can see if we like it or not.
(While having the Perl code is great, it's not necessarily the
easiest thing for everyone to analyse.)
Yes, I might do that in due course, but we need to toss the idea around
here a little bit more first (as we seem to be doing).
Post by Stephen Farrell
And we do have pluggability in base for c14n, so that if you're
right, and this new c14n produces significantly less brittle
signatures, then your new c14n algorithm would probably get
adopted fairly quickly in any case.
My concern is that people will tend not to implement stuff that is not in
the base standard. And any c14n has to be implemented at both ends in
order to be of any use.
--
Charles H. Lindsey ---------At Home, doing my own thing------------------------
Tel: +44 161 436 6131 
   Web: http://www.cs.man.ac.uk/~chl
Email: ***@clerew.man.ac.uk      Snail: 5 Clerewood Ave, CHEADLE, SK8 3JU, U.K.
PGP: 2C15F1A9      Fingerprint: 73 6D C2 51 93 A0 01 E7 65 E8 64 7E 14 A4 AB A5
Stephen Farrell
2006-12-07 10:27:27 UTC
Permalink
Charles,
Post by Charles Lindsey
Yes, I might do that in due course, but we need to toss the idea
around here a little bit more first (as we seem to be doing).
Good. I think that that's the right approach (and has the nice
side-effect of being a good check on whether we've done the
pluggability stuff well in base).

If draft-lindsey-dkim-better-c14n were ready for discussion in
Prague or before that'd be about right, IMO.
Post by Charles Lindsey
My concern is that people will tend not to implement stuff that is not
in the base standard. And any c14n has to be implemented at both ends in
order to be of any use.
I understand. However, in the case of xmlsig I think the 2nd c14n
spec (which was demonstrably more useful) basically won out even
though it was done about a year later.

So there is an existence proof that, where there's a real benefit,
the market moves to use c14n that works.

And in this case, as was pointed out, and I think agreed by you
above, yours is a late, though interesting, proposal that needs some
more work (e.g. in terms of security & performance analysis, field
testing, etc.). That's a fairly strong argument for going with the
current proposal in base since that's already been through those
hoops.

Cheers,
S.
Hector Santos
2006-12-07 10:45:33 UTC
Permalink
Post by Charles Lindsey
On Wed, 06 Dec 2006 16:58:41 -0000, Stephen Farrell
My concern is that people will tend not to implement stuff that is not
in the base standard. And any c14n has to be implemented at both ends in
order to be of any use.
Speaking for myself, I can almost guarantee you, from my long-held
mail product design philosophical and ethical standpoint, that we are not
going to implement anything that will require us to a) touch the
original mail integrity of the message or b) more importantly, anything
that will require us to 'bring apart' a MIME message for the purpose of
DKIM signing.

This is pretty much a non-starter for us. DKIM is not going to be the
"thing" that will change our product design into screwing around with
original mail integrity. The exception is any CR/LF conversion on
original submissions, but in principle pass-throughs will never
be touched or altered in any shape or form, and I hope others
don't begin to get into this dangerous game as well. I am confident this
is not going to happen.

Please excuse me for asking this, but are you attempting to develop a
"total product solution" with far reaching change requirements across
the board or a backend protocol that fits with the current framework as
best it can, yet offers the highest adoption potential?

---
HLS
John Levine
2006-11-06 15:45:01 UTC
Permalink
We're mostly talking past each other. DKIM is intended to be 8 bit
clean, and you can sign whatever you want. If you sign 8 bit data and
the path your message takes is transparent to 8 bit data, great, you
win; if not, you lose. In what I expect is the common case that you
don't know what MTAs are going to handle your message, downcoding
everything to 7bit is a lot more robust. Perhaps you could suggest
wording to make that clearer.
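To illustrate what downcoding means here, a minimal sketch using the
core MIME::QuotedPrint module (the body string is invented):

    use MIME::QuotedPrint qw(encode_qp);

    # Downcode an 8-bit text body to quoted-printable *before* signing,
    # so the bytes that get hashed are the same 7-bit bytes that survive
    # an 8BITMIME-to-7bit downgrade along the path.
    my $body_8bit = "caf\xE9 au lait\n";    # Latin-1 e-acute
    my $body_7bit = encode_qp($body_8bit);  # "caf=E9 au lait\n"
    # ...hash and sign $body_7bit, and send with C-T-E: quoted-printable.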
Post by Charles Lindsey
Post by John Levine
l= Body length count
But the people who want it won't take kindly to having it deleted by
overly "helpful" verifier policy modules. Currently the draft suggests
that is a reasonable practice. It isn't, and the draft should not be
saying such things. By all means warn the user, and even provide the user
with tools to delete it. But don't chop bits out of his mail without his
approval.
I don't see any place in the draft where it says anything about who's
approving or not approving of any particular operations. If I were
one of the band of optimists who thinks that "l=" will be useful, I
expect I would heartily approve of my MTA chopping off the unsigned
bits in my incoming mail.
Post by Charles Lindsey
[ re sha1 vs sha256 ]
Post by John Levine
The MUST is quite deliberate so that DKIM implementations will
interoperate. You're welcome to do whatever you want to exchange
messages with your friends, but for mail to everyone else, you have
to use SHA-256 and dns/ext because that's what you know they'll be
able to handle.
But that is not what the draft says.
Our intent is that signers have to sign with sha256 and, for backward
compatibility, can also add a sha1 signature. Verifiers have to
handle either. If the wording doesn't say that, we need to fix it.
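In other words, a message would carry a pair of signatures along these
lines (tag values elided; the domain and selector are placeholders):

    DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=example.com;
        s=sel; h=from:to:subject:date; bh=...; b=...
    DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/simple; d=example.com;
        s=sel; h=from:to:subject:date; bh=...; b=...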
Post by Charles Lindsey
Post by John Levine
DKIM doesn't understand MIME. If DKIM signers and verifiers had to
unpack MIME parts they would be orders of magnitude more complicated.
In practice, I think that nearly everyone uses the simple body canon
anyway.
Not at all. Going through the MIME structure of a message body and undoing
all Q-P or Base64 encodings is fairly straightforward,
The people who have written DKIM signers and verifiers can comment on
how much complication would be involved if they had to add MIME
packers and unpackers to them.
Post by Charles Lindsey
And, moreover, I do not see why the 'simple' canonicalization is the
default (or even why it exists at all, for both headers and bodies).
Can anybody suggest a scam or threat that would be facilitated if
"relaxed" rather than "simple" was used?
May I direct your attention to the lengthy discussions of these topics
in the list archives?

R's,
John
Arvel Hathcock
2006-11-06 21:28:20 UTC
Permalink
Post by John Levine
Post by Charles Lindsey
Not at all. Going through the MIME structure of a
message body and undoing all Q-P or Base64 encodings
is fairly straightforward,
The people who have written DKIM signers and verifiers
can comment on how much complication would be involved
if they had to add MIME packers and unpackers to them.
Dealing with MIME would make things much much more complex. I'm so glad
we don't have to deal with that. It would be a huge and needless
implementation burden IMO.

Arvel
B***@cox.com
2006-12-06 14:58:55 UTC
Permalink
Nice code. Now, during your testing, how many messages (average message size today 3k) per second were you able to process, and on what machine? I need something that can do about 1200 messages per second.
Thanks,

Bill Oxley
Messaging Engineer
Cox Communications
404-847-6397
-----Original Message-----
From: ietf-dkim-***@mipassoc.org [mailto:ietf-dkim-***@mipassoc.org] On Behalf Of Charles Lindsey
Sent: Wednesday, December 06, 2006 8:13 AM
To: DKIM
Subject: Re: Fwd: Re: [ietf-dkim] Introducing myself
That was quite some time ago, so to refresh your memories, I had been
claiming that DKIM-base would fail to verify if some message had its
Content-Transfer-Encoding changed en route, and that it proposed to get
around this by saying that all messages SHOULD be sent as 7bit, or encoded
into 7bit. In these days when 8BITMIME is now almost universally supported
and widely used (with BINARYMIME coming along as well), that seemed to be
a very backward step. So I proposed a canonicalization that would reverse
all those encodings before hashing.
Post by Charles Lindsey
Post by John Levine
You can sign whatever you want, but if the message is 7bit, your
signature is more likely to survive transit to the verifier.
But of course I don't want them to be "likely to survive". I want a system
that is robust enough that they "always survive".
Post by Charles Lindsey
Post by John Levine
DKIM doesn't understand MIME. If DKIM signers and verifiers had to
unpack MIME parts they would be orders of magnitude more complicated.
In practice, I think that nearly everyone uses the simple body canon
anyway.
Not at all. Going through the MIME structure of a message body and
undoing all Q-P or Base64 encodings is fairly straightforward, and if
you hash and
sign the result of doing that, then it is guaranteed to pass straight
through all those systems which (quite legitimately under RFC 1652)
re-encode stuff en route, without breaking the signature. I shall try to
write a demonstration implementation in the next day or so, and it
certainly won't be "orders of magnitude more complicated".
So it was an issue of whether such a canonicalization really would be
"orders of magnitude more complicated". Anyway, I have been working off
and on on this since then, and I have written a demonstration
implementation, as promised, of what it would take, which you can find at
<http://www.cs.man.ac.uk/~chl/uncode/uncode.html>.

It is less than 140 lines of Perl (excluding comments and empty lines).
Hardly any "orders of magnitude" in evidence there.
--
Charles H. Lindsey ---------At Home, doing my own thing------------------------
Tel: +44 161 436 6131 
   Web: http://www.cs.man.ac.uk/~chl
Email: ***@clerew.man.ac.uk      Snail: 5 Clerewood Ave, CHEADLE, SK8 3JU, U.K.
PGP: 2C15F1A9      Fingerprint: 73 6D C2 51 93 A0 01 E7 65 E8 64 7E 14 A4 AB A5
_______________________________________________
NOTE WELL: This list operates according to
http://mipassoc.org/dkim/ietf-list-rules.html
Charles Lindsey
2006-12-07 09:56:41 UTC
Permalink
Post by B***@cox.com
Nice code. Now, during your testing, how many messages (average message
size today 3k) per second were you able to process, and on what machine?
I need something that can do about 1200 messages per second.
Thanks,
I haven't done any speed testing yet, but will try some now.

But bear in mind my program was intended to show clearly what needs to be
done. You would not write a production version in Perl, though you might
use that Perl version as a guide to ensure that the production version had
covered all the corners.

But having said that, the question to ask is whether adding this feature
is going to make things significantly slower than all the other extra
stuff you have to add to implement DKIM. My expectation is that the
machine cycles for finding and reversing the MIME encodings will be much
less than the machine cycles consumed by performing the SHA-256 hash,
which has to be done over the whole body in any case.

But that bit is easily checked with my program, since the decoding and
hashing within the Perl library are written in C anyway.
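Anyone who wants to try the comparison could do something like this
(core Benchmark, Digest::SHA and MIME::Base64 modules; the 3k body is
invented for the test):

    use Benchmark qw(cmpthese);
    use Digest::SHA qw(sha256);
    use MIME::Base64 qw(encode_base64 decode_base64);

    # A ~3k body, Base64-encoded -- roughly the "average message" size.
    my $body    = "The quick brown fox jumps over the lazy dog.\r\n" x 64;
    my $encoded = encode_base64($body);

    cmpthese(-3, {                  # run each for about 3 CPU-seconds
        decode_b64 => sub { my $d = decode_base64($encoded) },
        sha256     => sub { my $h = sha256($body) },
    });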
--
Charles H. Lindsey ---------At Home, doing my own thing------------------------
Tel: +44 161 436 6131 
   Web: http://www.cs.man.ac.uk/~chl
Email: ***@clerew.man.ac.uk      Snail: 5 Clerewood Ave, CHEADLE, SK8 3JU, U.K.
PGP: 2C15F1A9      Fingerprint: 73 6D C2 51 93 A0 01 E7 65 E8 64 7E 14 A4 AB A5
Barry Leiba
2006-12-07 15:26:40 UTC
Permalink
Please stop this thread with this subject line, and see my next message.

--
Barry Leiba, DKIM Working Group chair (***@watson.ibm.com)
Barry Leiba
2006-12-07 15:31:01 UTC
Permalink
If this discussion is to continue, please use this subject line (or
another one like it that you prefer). Looking for it and keeping track
of it with that "introducing myself" subject is silly.

Apart from that, I'd like to see the discussion stop (though we chairs
are not going to put any feet down about it yet) until there's an
Internet Draft documenting a new proposed canonicalization. Then we
have something clear to discuss.

Also note that the base spec has completed last call and is scheduled
for the next IESG telechat on 12 Dec. Charles has brought this up in
last-call comments, and the IESG will be considering it, along with the
work the WG has already done on canonicalization.

--
Barry Leiba, DKIM Working Group chair (***@watson.ibm.com)
Charles Lindsey
2006-12-08 13:38:51 UTC
Permalink
Note change of Subject at request of Barry Leiba. The original thread
(Introducing Myself) was fine when I first joined this list with a long
list of issues I was concerned about, but it is well past its sell-by-date
now.

In due course, this needs an I-D draft with a definite proposal, but the
issues could use a little more informal discussion first.
Post by Charles Lindsey
Post by B***@cox.com
Nice code. Now, during your testing, how many messages (average message
size today 3k) per second were you able to process, and on what machine?
I need something that can do about 1200 messages per second.
Thanks,
I haven't done any speed testing yet, but will try some now.
I did some experimentation last night, but the outcome was that, whilst
Perl is a fine tool for setting out clearly the essential features of an
algorithm, it is of no help in estimating how fast it might run.

Being an interpreted language, which calls library subroutines to do the
interesting stuff, it can run terribly slowly, but then do something
blindingly fast when it hits a subroutine written in C.

So yes, I could just about tell that decoding Base64 was faster than
generating a SHA-256 hash, but not reliably by how much.

Essentially, however, the inner loop of what I am suggesting would look
like this:

Go through the input stream looking for CRLF.
When you find one:
   Look for whitespace before it (to delete it, as per the 'relaxed'
   c14n).
   Look for '--' after it, to check whether you have possibly reached
   the end of the current part (of some multipart).
   Copy what you have got to the output stream, with or without
   decoding of Q-P or Base64 according to the CTE in force.
Put the output stream through SHA-256.

Of all that, everything but looking for the '--' and the possible
decoding of Q-P/Base64 has to be done for the present 'relaxed' c14n. My
belief is that the SHA-256 will consume most of the machine cycles in
that, with the search for the CRLF using quite a lot. Hence the addition
of the decoding should not make a huge difference percentage-wise. But it
would be necessary to rewrite the whole thing in C to get exact figures,
and it would be premature to do that just yet.
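In Perl terms, the skeleton of that inner loop would be roughly this
(per-part CTE tracking and proper boundary matching are elided, so this
is a sketch rather than the real thing):

    use MIME::QuotedPrint qw(decode_qp);
    use MIME::Base64 qw(decode_base64);
    use Digest::SHA;

    my $sha = Digest::SHA->new(256);
    my $cte = 'quoted-printable';     # would be tracked per MIME part
    while (my $line = <STDIN>) {      # reads up to and including CRLF
        $line =~ s/[ \t]+(?=\r?\n)//; # delete trailing WSP ('relaxed')
        next if $line =~ /^--/;       # crude check for a part boundary
        $line = decode_qp($line)      if $cte eq 'quoted-printable';
        $line = decode_base64($line)  if $cte eq 'base64';
        $sha->add($line);             # the "output stream" goes to SHA-256
    }
    print $sha->hexdigest, "\n";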
--
Charles H. Lindsey ---------At Home, doing my own thing------------------------
Tel: +44 161 436 6131 
   Web: http://www.cs.man.ac.uk/~chl
Email: ***@clerew.man.ac.uk      Snail: 5 Clerewood Ave, CHEADLE, SK8 3JU, U.K.
PGP: 2C15F1A9      Fingerprint: 73 6D C2 51 93 A0 01 E7 65 E8 64 7E 14 A4 AB A5
Wietse Venema
2006-12-06 16:36:32 UTC
Permalink
Post by Charles Lindsey
That was quite some time ago, so to refresh your memories, I had been
claiming that DKIM-base would fail to verify if some message had its
Content-Transfer-Encoding changed en route, and that it proposed to get
around this by saying that all messages SHOULD be sent as 7bit, or encoded
into 7bit. In these days when 8BITMIME is now almost universally supported
and widely used (with BINARYMIME coming along as well), that seemed to be
a very backward step. So I proposed a canonicalization that would reverse
all those encodings before hashing.
...
Post by Charles Lindsey
So it was an issue of whether such a canonicalization really would be
"orders of magnitude more complicated". Anyway, I have been working off
and on on this since then, and I have written a demonstration
implementation, as promised, of what it would take, which you can find at
<http://www.cs.man.ac.uk/~chl/uncode/uncode.html>.
It is less than 140 lines of Perl (excluding comments and empty lines).
Hardly any "orders of magnitude" in evidence there.
Actually, it's 128 lines. But that's a minor detail.

With real production MTAs such as Sendmail and Postfix, the MIME
processor takes about 900 lines of C code (sans comments, formatted
in the K&R coding style). That's 900 lines of opportunity for error.

My concern is about interoperability. With the present design,
senders and recipients who exchange QP or Base64 content only need
bug-compatible MIME processors in their respective MUAs.

When DKIM signers and verifiers are required to up-convert QP or
Base64 content before computing signatures, we also require that
all DKIM signers and verifiers have bug-compatible MIME processors.
That is, bug-compatible with every MUA.

Introducing this extra requirement is unlikely to help with the
successful adoption of DKIM.

Wietse
Dave Crocker
2006-12-06 17:51:30 UTC
Permalink
Post by Wietse Venema
Post by Charles Lindsey
That was quite some time ago, so to refresh your memories, I had been
claiming that DKIM-base would fail to verify if some message had its
Content-Transfer-Encoding changed en route, and that it proposed to get
...
Post by Wietse Venema
When DKIM signers and verifiers are required to up-convert QP or
Base64 content before computing signatures, we also require that
all DKIM signers and verifiers have bug-compatible MIME processors.
That is, bug-compatible with every MUA.
Introducing this extra requirement is unlikely to help with the
successful adoption of DKIM.
Canonicalization has been recognized as a *very* challenging topic for at least
15 years of Internet mail work. It was a major focus for MIME, it was a major
focus for DomainKeys and it was a major focus for DKIM. (I'm sure it's been a
major focus elsewhere, but this list will suffice.)

My own summary is that we know we can trade between quality/simplicity and
robustness/fragility. There seems to be no perfect choice.

This invites infinite discussion. We should decline the invitation.

DKIM permits more than one canonicalization scheme to be defined. The current
set is the result of lengthy discussion and even experience. As Stephen notes,
the list can be extended, without requiring that we replace any existing entry.

If the current set proves problematic *in the field* then we can add more... later.

We most certainly do *not* need to add consideration of additional schemes to
the current public discussion, given that the focus now should be on adoption
and use of the current specification that has benefited from a couple of
years' substantial effort.

That said, as Stephen notes, anyone is of course free to write an Internet-Draft
proposing additional schemes.


d/
--
Dave Crocker
Brandenburg InternetWorking
bbiw.net
Charles Lindsey
2006-12-07 10:41:32 UTC
Permalink
Post by Wietse Venema
Post by Charles Lindsey
That was quite some time ago, so to refresh your memories, I had been
claiming that DKIM-base would fail to verify if some message had its
Content-Transfer-Encoding changed en route,....
It is less than 140 lines of Perl (excluding comments and empty lines).
Hardly any "orders of magnitude" in evidence there.
Actually, it's 128 lines. But that's a minor detail.
Hmmm! I actually counted 138 :-( .
Post by Wietse Venema
My concern is about interoperability. With the present design,
senders and recipients who exchange QP or Base64 content only need
bug-compatible MIME processors in their respective MUAs.
I have little sympathy with implementations that don't adhere to standards.
Post by Wietse Venema
When DKIM signers and verifiers are required to up-convert QP or
Base64 content before computing signatures, we also require that
all DKIM signers and verifiers have bug-compatible MIME processors.
That is, bug-compatible with every MUA.
However, it is not as bad there as you suggest. Provided the c14n is
correctly implemented at both ends (and there is never any room for
incorrectly implemented c14n), it does not matter if some buggy MUA
produces bad Q-P or Base64, because the c14n will treat it the same way at
both ends. But the specification of the c14n has to be very tightly drawn.

It *does* matter if some MTA that downgrades 8BITMIME en route gets it
wrong. And I need to look into that (I have the source code of sendmail to
hand). Fortunately, RFC 2045 defines pretty exactly how Q-P and Base64
are to be done, especially as regards which CRLFs belong to the text being
(en/de)coded, and which to the structure of the multipart.
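As a small illustration with the core MIME::QuotedPrint module (the
broken sequence is invented): whatever the decoder does with a malformed
sequence, it does the same thing at both ends, which is all the c14n
needs:

    use MIME::QuotedPrint qw(decode_qp);

    # '=E9' is valid Q-P; '=XZ' is not. The decode rule for the bad
    # sequence does not matter, provided signer and verifier apply the
    # *same* rule -- then the two hashes still agree.
    print decode_qp("caf=E9 =XZ broken\n");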
--
Charles H. Lindsey ---------At Home, doing my own thing------------------------
Tel: +44 161 436 6131 
   Web: http://www.cs.man.ac.uk/~chl
Email: ***@clerew.man.ac.uk      Snail: 5 Clerewood Ave, CHEADLE, SK8 3JU, U.K.
PGP: 2C15F1A9      Fingerprint: 73 6D C2 51 93 A0 01 E7 65 E8 64 7E 14 A4 AB A5
Hallam-Baker, Phillip
2006-12-06 18:52:15 UTC
Permalink
Agreed; indeed, part of the logic for only providing two options in the first place was that we knew we might need a third later.

I don't think that the C14N issue we face is as challenging as the MIME one, as we are initially only looking at edge-to-edge, not end-to-end (or at least end-host-to-end-host, the end being the user, not the machine).

The other expectation that we might have that did not apply to MIME was the expectation that deployment would drive conformance. The penalty for modifying the content is now much greater.
John Levine
2006-12-06 20:25:39 UTC
Permalink
Post by Charles Lindsey
But of course I don't want them to be "likely to survive". I want a system
that is robust enough that they "always survive".
As I recall, we agreed that is specifically not a goal of DKIM. If
you want a signing scheme designed to survive all sorts of hostile
gateways, there's already S/MIME. The limited c14n in DKIM is
intended to survive only the most common sorts of transit relays.

Honestly, I'd be more inclined to go in the other direction and
deprecate the relaxed body c14n, since it is my impression that the
simple one works in practice for nearly any message that relaxed does,
and relaxed is more complicated and may be vulnerable to ASCII art
hacks.
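For reference, the whole of the 'relaxed' body preparation amounts to
something like this sketch of the rules in dkim-base (not anyone's
production code):

    # 'Relaxed' body canonicalization, as dkim-base describes it:
    # collapse WSP runs to one SP, drop WSP at line ends, and drop
    # empty lines at the end of the body.
    sub relax_body {
        my ($body) = @_;
        my @lines = split /\r\n/, $body, -1;
        for (@lines) {
            s/[ \t]+/ /g;    # interior WSP runs -> single SP
            s/ $//;          # then strip the trailing SP, if any
        }
        pop @lines while @lines && $lines[-1] eq '';
        return join '', map { "$_\r\n" } @lines;
    }

    # Two bodies differing only in trailing whitespace hash alike:
    print relax_body("art  \r\n\r\n") eq relax_body("art\r\n")
        ? "same\n" : "differ\n";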

R's,
John
Charles Lindsey
2006-12-07 10:25:35 UTC
Permalink
Post by John Levine
Post by Charles Lindsey
But of course I don't want them to be "likely to survive". I want a system
that is robust enough that they "always survive".
As I recall, we agreed that is specifically not a goal of DKIM. If
you want a signing scheme designed to survive all sorts of hostile
gateways, there's already S/MIME. The limited c14n in DKIM is
intended to survive only the most common sorts of transit relays.
Unfortunately, S/MIME already suffers from exactly the same
bug^H^H^Hfeature, which is why I was surprised to see that DKIM has
followed that same broken path.

DKIM will have no effect on the present spam/phishing/malware scene unless
it is widely adopted. It will not be widely adopted unless it is seen to
be robust. In particular, it will not be adopted in countries (esp those
in Asia) where the character sets used are totally unlike ASCII if it can
only be made to work by forcing everything to be sent as 7bit. They just
cannot survive in an environment where textual messages 'on the wire'
cannot easily be read in that form. They will just resort to "send 8bits
anyway" which is already happening, even with headers, to a large extent,
because 99.9% of the time it actually works like that without problem.

That is why the parallel EAI effort has been mentioned so often in these
discussions, because it is pulling in exactly the opposite direction to
this WG, and it is the Chinese and the Japanese who are pulling the
hardest.
Post by John Levine
Honestly, I'd be more inclined to go in the other direction and
deprecate the relaxed body c14n, since it is my impression that the
simple one works in practice for nearly any message that relaxed does,
and relaxed is more complicated and may be vulnerable to ASCII art
hacks.
It has been standard practice in PGP, since its inception, to ignore
trailing whitespace (unless you explicitly ask it not to). I have never
heard of a Bad Guy who managed to create a correctly signed message
with usefully different content by taking advantage of that.
--
Charles H. Lindsey ---------At Home, doing my own thing------------------------
Tel: +44 161 436 6131 
   Web: http://www.cs.man.ac.uk/~chl
Email: ***@clerew.man.ac.uk      Snail: 5 Clerewood Ave, CHEADLE, SK8 3JU, U.K.
PGP: 2C15F1A9      Fingerprint: 73 6D C2 51 93 A0 01 E7 65 E8 64 7E 14 A4 AB A5
Hector Santos
2006-12-07 11:14:16 UTC
Permalink
Post by Charles Lindsey
Post by John Levine
As I recall, we agreed that is specifically not a goal of DKIM. If
you want a signing scheme designed to survive all sorts of hostile
gateways, there's already S/MIME. The limited c14n in DKIM is
intended to survive only the most common sorts of transit relays.
Unfortunately, S/MIME already suffers from exactly the same
bug^H^H^Hfeature, which is why I was surprised to see that DKIM has
followed that same broken path.
I don't believe it followed the same path. It's one reason we are
interested in its completion and implementation. The MUA version is one
that was easily seen as broken, simply because the MUA has no control over
backend server hosting systems. That was a no-brainer.
Post by Charles Lindsey
DKIM will have no effect on the present spam/phishing/malware scene
unless it is widely adopted.
I disagree with that premise. DKIM can have a highly effective
domain protection potential, depending on the domain's signature signing
policies. It's not a cure-all, but it can be highly effective in
eliminating the obvious failures. The DSAP I-D illustrates this:

http://tools.ietf.org/wg/dkim/draft-santos-dkim-dsap-00.txt
Post by Charles Lindsey
It will not be widely adopted unless it is seen to be robust.
which says that exclusivity does play a big role in DKIM success.
Relaxed provisions have always failed (been exploited) in protocols in
practice. This idea is like adding $25K in home
security, yet leaving a key under a potted plant on the porch.
Post by Charles Lindsey
In particular, it will not be adopted in countries
(esp those in Asia) where the character sets used are totally unlike
ASCII if it can only be made to work by forcing everything to be sent as
7bit.
You're assuming MUAs are involved, right? This all can be 100%
transparent if we keep DKIM out of the Presentation Layer.
Post by Charles Lindsey
They just cannot survive in an environment where textual messages
'on the wire' cannot easily be read in that form. They will just resort
to "send 8bits anyway" which is already happening, even with headers, to
a large extent, because 99.9% of the time it actually works like that
without problem.
Ok, if a transformation has to be done, then the DKIM owner will know
how to deal with this when it finds out that a failed outcome can not be
avoided. Or maybe they have worked out a route so that there will be
known middleware that will alter but also re-sign the mail. As long
as the final system validates and authorizes the domain signature, it's
fine. You can't use certain technologies in all systems, and that
applies in many areas.
Post by Charles Lindsey
That is why the parallel EAI effort has been mentioned so often in these
discussions, because it is pulling in exactly the opposite direction to
this WG, and it is the Chinese and the Japanese who are pulling the
hardest.
I don't see how it applies, and IMV it's no different when there are
other mid-stream transformation requirements and a DKIM owner who is or
is not aware of them. Regardless of the transformation requirements, whether
it's for Americans, British, Chinese or Martians, DKIM still needs
knowledge of any mail integrity change and of signing/resigning entities.

===
HLS
Charles Lindsey
2006-12-08 13:38:23 UTC
Permalink
Note change of Subject at request of Barry Leiba. The original thread
(Introducing Myself) was fine when I first joined this list with a long
list of issues I was concerned about, but it is well past its sell-by-date
now.

In due course, this needs an I-D draft with a definite proposal, but the
issues could use a little more informal discussion first.
Post by Hector Santos
In particular, it will not be adopted in countries (esp those in Asia)
where the character sets used are totally unlike ASCII if it can only
be made to work by forcing everything to be sent as 7bit.
You're assuming MUA are involved. Right? This all can be 100%
transparent if we keep DKIM out of the Presentation Layer.
No. AIUI DKIM is supposed to operate mainly between the 1st MTA after the
sending MUA and the last MTA before the receiving MUA (though smarter MUAs
that sign their own, or verify their own are welcome to try). But
nevertheless the form of the message as processed by MTAs 'on the wire'
does get looked at and needs to be looked at for all sorts of odd reasons,
and if the people looking at it find it squashed into 7bits that they
cannot readily interpret, they are going to revolt (and are doing so). It
is ridiculous that in the 21st century the mail protocols are still
cramming 8bit material into 7bits using about 6 different coding
mechanisms. That is the logjam that EAI is trying to break.
Post by Hector Santos
Ok, if a transformation has to be done, then the DKIM owner will know
how to deal with this when it finds out that a failed outcome can not be
avoided. Or maybe they have worked out a route so that there will be
known middleware that will alter but also re-sign the mail. As long
as the final system validates and authorizes the domain signature, it's
fine. You can't use certain technologies in all systems, and that
applies in many areas.
That might work for an 'owner' who regularly emails to the same group of
people and knows the routes by which they can be reached with 8BITMIME
supported all the way. But knowledge of detailed routes largely
disappeared 20 years ago when 'bang' paths went out of fashion, and any
need for such knowledge was finally outlawed with RFC 2821. And it is no
use at all if the 'owner' wants to communicate reliably, with the full
protection of DKIM, with arbitrarily selected people the world over.
Post by Hector Santos
That is why the parallel EAI effort has been mentioned so often in these
discussions, because it is pulling in exactly the opposite direction to
this WG, and it is the Chinese and the Japanese who are pulling the
hardest.
I don't see how it applies, and IMV it's no different when there are
other mid-stream transformation requirements and a DKIM owner who is or
is not aware of them. Regardless of the transformation requirements, whether
it's for Americans, British, Chinese or Martians, DKIM still needs
knowledge of any mail integrity change and of signing/resigning entities.
How DKIM will work in an EAI context is not yet clear. For messages which
remain in an EAI (aka UTF8SMTP) environment throughout their journey, DKIM
should work OK, provided implementors heed the advice in dkim-base to
maintain 8bit cleanliness in strings. But if a UTF8SMTP message has to be
downgraded by some MTA en route, then secondary signing by that MTA is
just not an option. The best solution so far seems to be for the verifier
to upgrade the message to its original form before checking it. That
should in principle be possible as things are being defined, but whether
with sufficient robustness to always work is as yet far from clear.
--
Charles H. Lindsey ---------At Home, doing my own thing------------------------
Tel: +44 161 436 6131 
   Web: http://www.cs.man.ac.uk/~chl
Email: ***@clerew.man.ac.uk      Snail: 5 Clerewood Ave, CHEADLE, SK8 3JU, U.K.
PGP: 2C15F1A9      Fingerprint: 73 6D C2 51 93 A0 01 E7 65 E8 64 7E 14 A4 AB A5
Dave Crocker
2006-12-08 22:24:54 UTC
Permalink
Post by Charles Lindsey
How DKIM will work in an EAI context is not yet clear. For messages
which remain in an EAI (aka UTF8SMTP) environment throughout their
journey, DKIM should work OK, provided implementors heed the advice in
dkim-base to maintain 8bit cleanliness in strings. But if a UTF8SMTP
message has to be downgraded by some MTA en route, then secondary
signing by that MTA is just not an option. ...
It occurs to me that this is probably not a DKIM topic at all. I don't mean
that it isn't relevant to DKIM, but rather that it is not *specific* to DKIM.

EAI is a long-standing problem and canonicalization of email text is a
long-standing issue. I suspect that your focus is appropriate to a venue with
that mix of interest, rather than DKIM, per se.

Let me suggest this more strongly: These arenas of internationalization and
canonicalization have proved exceptionally difficult and the sort of thing you
are attempting to pursue *should* be of benefit -- and therefore interest -- to
the larger email text-handling community.

That said, I'm not sure what venue to suggest, and I don't want to guess, lest
it confuse things further.

d/
--
Dave Crocker
Brandenburg InternetWorking
bbiw.net
Charles Lindsey
2006-12-11 12:01:14 UTC
Permalink
Post by Dave Crocker
Post by Charles Lindsey
How DKIM will work in an EAI context is not yet clear. For messages
which remain in an EAI (aka UTF8SMTP) environment throughout their
journey, DKIM should work OK, provided implementors heed the advice in
dkim-base to maintain 8bit cleanliness in strings. But if a UTF8SMTP
message has to be downgraded by some MTA en route, then secondary
signing by that MTA is just not an option. ...
It occurs to me that this is probably not a DKIM topic at all. I don't
mean that it isn't relevant to DKIM, but rather that it is not
*specific* to DKIM.
This is an area where the work of the DKIM WG and that of the EAI WG may
conflict. I think all that can be done is for both groups to be on the
lookout for such conflicts, and to avoid them where possible.

So in this group, it would be reasonable to point out "that feature might
cause problems for EAI - here is how you might avoid them", and on the EAI
group "this feature might cause difficulties if the message were to be
DKIM-signed - here is a way to get around it".
--
Charles H. Lindsey ---------At Home, doing my own thing------------------------
Tel: +44 161 436 6131 
   Web: http://www.cs.man.ac.uk/~chl
Email: ***@clerew.man.ac.uk      Snail: 5 Clerewood Ave, CHEADLE, SK8 3JU, U.K.
PGP: 2C15F1A9      Fingerprint: 73 6D C2 51 93 A0 01 E7 65 E8 64 7E 14 A4 AB A5
Dave Crocker
2006-12-09 20:20:57 UTC
Permalink
offlist.
Post by Charles Lindsey
DKIM will have no effect on the present spam/phishing/malware scene
unless it is widely adopted. It will not be widely adopted unless it is
seen to be robust. In particular, it will not be adopted in countries
(esp those in Asia) where the character sets used are totally unlike
ASCII if it can only be made to work by forcing everything to be sent as
7bit. They just cannot survive in an environment where textual messages
'on the wire' cannot easily be read in that form. They will just resort
to "send 8bits anyway" which is already happening, even with headers, to
a large extent, because 99.9% of the time it actually works like that
without problem.
That is why the parallel EAI effort has been mentioned so often in these
discussions, because it is pulling in exactly the opposite direction to
this WG, and it is the Chinese and the Japanese who are pulling the
hardest.
Scott Kitterman
2006-12-09 20:27:17 UTC
Permalink
Post by Dave Crocker
offlist.
or not.

Scott K
B***@cox.com
2006-12-08 15:05:14 UTC
Permalink
Slight disagreement
" No. AIUI DKIM is supposed to operate mainly between the 1st MTA after
the
sending MUA and the last MTA before the receiving MUA (though smarter
MUAs
that sign their own, or verify their own are welcome to try)."

I would suggest that DKIM operates between the signing MTA and the edge
boundary MTA of the receiving domain that is the certifier of DKIM
signatures which may be a smart MUA but is more likely a filtering MTA
at the ISP.

Thanks,



Bill Oxley
Messaging Engineer
Cox Communications
Douglas Otis
2006-12-08 17:32:52 UTC
Permalink
Post by B***@cox.com
Slight disagreement
" No. AIUI DKIM is supposed to operate mainly between the 1st MTA
after the sending MUA and the last MTA before the receiving MUA
(though smarter MUAs that sign their own, or verify their own, are
welcome to try)."
I would suggest that DKIM operates between the signing MTA and the
edge boundary MTA of the receiving domain, that is, the certifier of
DKIM signatures, which may be a smart MUA but is more likely a
filtering MTA at the ISP.
Signing is not limited to the MTA; it can be done at the MUA. In
addition, the protections afforded by DKIM require the MUA to verify
signatures or obtain trustworthy signaling from the MDA. Blocking at
the MTA can not offer adequate protection. It would be wrong to
expect that blocking at the MTA via restrictive policy produces a
significant effect on the level of abuse. Blocking via policy
definitely does _not_ offer much in the way of protection, but will
require a significant level of support explaining why various
messages are being rejected.

-Doug
Hector Santos
2006-12-08 17:55:29 UTC
Permalink
Post by Douglas Otis
Signing is not limited to the MTA; it can be done at the MUA. In
addition, the protections afforded by DKIM require the MUA to verify
signatures or obtain trustworthy signaling from the MDA.
I'm sorry. What section in the DKIM specification does it say it
"requires the MUA to verify signatures"?
Post by Douglas Otis
Blocking at the MTA can not offer adequate protection.
Why not?
Post by Douglas Otis
It would be wrong to expect that blocking at the MTA via restrictive
policy produces a significant effect on the level of abuse.
Bad Guy uses my domain.com at site XYZ. Site XYZ looks up my policy and
finds he wasn't supposed to use my DOMAIN.

What's wrong with expecting this? Is this not a highly probable event?
Post by Douglas Otis
Blocking via policy definitely does _not_ offer
much in the way of protection, but will require a significant level of
support explaining why various messages are being rejected.
It will?

- A domain does not expect mail. Pretty good protection.
- A domain requires mail to be signed. Pretty good protection.

Those two alone will cut down a very significant amount of the most
common exploitations without requiring any feedback whatsoever.

--
HLS
Douglas Otis
2006-12-08 21:42:13 UTC
Permalink
Post by Hector Santos
Post by Douglas Otis
Signing is not limited to the MTA; it can be done at the MUA. In
addition, the protections afforded by DKIM require the MUA to verify
signatures or obtain trustworthy signaling from the MDA.
I'm sorry. What section in the DKIM specification does it say it
"requires the MUA to verify signatures"?
The DKIM specification does not indicate how protective benefits are
derived. It surely does not say the MUA can not verify signatures.
DKIM use at the MUA has an advantage over SPF, which must often depend
upon Received headers, including the optional IP address of the SMTP
client.
Post by Hector Santos
Post by Douglas Otis
Blocking at the MTA can not offer adequate protection.
Why not?
You can not safely tell customers they are protected at the MTA from
spoofs with enforcement of DKIM's policies. Currently MUAs do not
adequately display email-addresses to safely allow visual
verification. When your customers assume they are protected by
assurances of MTA policy blocking, they more easily fall victim to
visual obfuscation techniques. These techniques are not handled by
DKIM when it is limited to just email-address domains. Protection must
also work when email-addresses are not ASCII.
Post by Hector Santos
Post by Douglas Otis
It would be wrong to expect blocking at the MTA via restrictive
policy produces a significant effect on the level of abuse.
Bad Guy uses my domain.com at site XYZ. Site XYZ looks up my policy
and finds he wasn't supposed to use my DOMAIN.
What's wrong with expecting this? Is this not a highly probable event?
Because bad actors adapt, and then you might detect a few
percent of lazy ones, as with SPF. All you have done is add more
overhead, and removing it later places your customers at greater risk
after hearing your initial promise of applying DKIM policy to block.
When bad actors adapt, can you stop making searches up label trees
for each message you receive? You have created two bad problems
based upon this assumption:

1) Increased overhead for the MTA for little benefit.
2) Increased susceptibility to spoofing due to the false claim of
protection.
Post by Hector Santos
Post by Douglas Otis
Blocking via policy definitely does _not_ offer much in the way of
protection, but will require a significant level of support
explaining why various messages are being rejected.
It will?
- A domain does not expect mail. Pretty good protection.
- A domain requires mail to be signed. Pretty good protection.
Only when message originators are recognized and verified by the MUA,
or by the MUA in conjunction with the MDA where annotation protection
can be achieved. Visual examination of a
Hector Santos
2006-12-08 23:05:57 UTC
Permalink
Post by Douglas Otis
Post by Hector Santos
I'm sorry. What section in the DKIM specification does it say it
"requires the MUA to verify signatures"?
The DKIM specification does not indicate how protective benefits are
derived. It surely does not say the MUA can not verify signatures.
DKIM use at the MUA has an advantage over SPF that must often depend
upon Received headers including the optional IP address of the SMTP client.
Whatever, it does not say DKIM "requires the MUA to verify signatures."
Post by Douglas Otis
Post by Hector Santos
Post by Douglas Otis
Blocking at the MTA can not offer adequate protection.
What's wrong with expecting this? Is this not a highly probable event?
Because bad actors adapt, and then you might detect a few percent
of lazy ones, as with SPF.
Who's talking about SPF?
Post by Douglas Otis
Post by Hector Santos
Post by Douglas Otis
Blocking via policy definitely does _not_ offer much in the way of
protection, but will require a significant level of support
explaining why various messages are being rejected.
It will?
- A domain does not expect mail. Pretty good protection.
- A domain requires mail to be signed. Pretty good protection.
Only when message originators are recognized and verified by the MUA,
Nope, once again, MUAs are not required. I can do the above easily at the
MDA.
Douglas Otis
2006-12-08 23:33:37 UTC
Permalink
Post by Hector Santos
Post by Douglas Otis
Post by Hector Santos
Post by Douglas Otis
Blocking via policy definitely does _not_ offer much in the way
of protection, but will require a significant level of support
explaining why various messages are being rejected.
It will?
- A domain does not expect mail. Pretty good protection.
- A domain requires mail to be signed. Pretty good protection.
Only when message originators are recognized and verified by the MUA,
Nope, once again, MUAs are not required. I can do the above easily
at the MDA.
Is viewing the display name protected by this effort?

Is receiving non-ASCII email-addresses protected by this effort?

Are look-alike and cousin-domains prevented?

What happens when a domain wishes to allow users the use of a mailing-
list? Should they set up different domain names, or use a sub-
domain? How will increased domain names of the same entity better
allow a recipient to detect a spoof?

You can not offer "pretty good protection" at the MTA based upon
policy blocking. Simple schemes remain where your customers continue
to be spoofed. Annotation at the MUA can prevent these schemes,
works with non-ASCII email-addresses, prevents look-alike and
cousin-domain exploits, and permits the use of mailing-lists without
additional domain names.

Policy based blocking is not a desirable feature when it will likely
make the situation worse at substantial costs to resources.

-Doug
Hector Santos
2006-12-09 00:22:06 UTC
Permalink
Post by Douglas Otis
Post by Hector Santos
Nope, once again, MUA are not required. I can do the above easily at
the MDA.
Is viewing the display name protected by this effort?
N/A - MUAs are not part of the process for this protocol ratification!
Post by Douglas Otis
Is receiving non-ASCII email-addresses protected by this effort?
Unrelated to the basic QUESTION of DOMAIN protection via SSP.
Post by Douglas Otis
Are look-alike and cousin-domains prevented?
Doesn't APPLY!
Post by Douglas Otis
What happens when a domain wishes to allow users the use of a mailing-list?
Then the domain is asking for trouble, because MAILING-LISTs are known to
break the integrity of the mail.
Post by Douglas Otis
Should they set up different domain names, or use a sub-domain?
Maybe, if that is what they want.
Post by Douglas Otis
How will increased domain names of the same entity better
allow a recipient to detect a spoof?
Doesn't apply. We are talking about 1 domain. One transaction at a time.
Post by Douglas Otis
You can not offer "pretty good protection" at the MTA based upon policy
blocking.
I sure can and if I didn't think so, I wouldn't be touching or even
looking at DKIM/SSP.
Post by Douglas Otis
Simple schemes remain where your customers continue to be
spoofed.
Not with a DKIM/SSP framework. Phished? Sure. Not SPOOFED. Plus
if a bad guy wanted to BREAK DKIM/SSP, all he has to do is AVOID using
it and stay away from DKIM/SSP protected domains!
Post by Douglas Otis
Annotation at the MUA can prevent these schemes,
This is like saying,

"Mom, can I let the rabid dog in the house? He looks so cute!"

Mom is not going to let Johnny get in trouble if she can help it. But
just in case, she might give Johnny a rabies shot.
Post by Douglas Otis
works with
non-ASCII email-addresses, prevents look-alike and cousin-domain
exploits, and permits the use of mailing-lists without additional domain
names.
Out of scope! You're stuck with this because you are looking for a MUA
solution - unrealistic.
Post by Douglas Otis
Policy based blocking is not a desirable feature when it will likely
make the situation worse at substantial costs to resources.
But that premise is highly false. So why bother to continue? It's
Friday! That's why!

---
HLS
Douglas Otis
2006-12-09 02:17:46 UTC
Permalink
Post by Hector Santos
Post by Douglas Otis
Post by Hector Santos
Nope, once again, MUA are not required. I can do the above easily
at the MDA.
Is viewing the display name protected by this effort?
N/A - MUA are not part of the process for this protocol ratification!
What element is ratified? What benefit is established when
recipients may not understand what is signed, and who actually sent
the message? Do you care that this blocking scheme fails to ensure
recipient protections? This should not be just a check mark on a
feature list.
Post by Hector Santos
Post by Douglas Otis
Is receiving non-ASCII email-addresses protected by this effort?
Unrelated to the basic QUESTION of DOMAIN protection via SSP.
Rather than just establishing restrictions, policy can also establish
associations and permit a "recognized" email-address 'X' to be used
in conjunction with signing-domain 'Y' to generate an annotated
message. Annotation can be a gold-star or placement into a folder or
specialized mailbox handled by _either_ the MDA or the MUA.
Recognition criteria can be established by the account's address-book
or a DAC list.
Post by Hector Santos
Post by Douglas Otis
Are look-alike and cousin-domains prevented?
Doesn't APPLY!
What is the goal then?
Post by Hector Santos
Post by Douglas Otis
What happens when a domain wishes to allow users the use of a mailing-
list?
Then the domain is asking for trouble, because MAILING-LISTs are known to
break the integrity of the mail.
When a message is signed by a mailing-list, this establishes the
signature's integrity. Annotation can make it clear which header was used,
and that the originator is known (and trusted). When annotation
added by the MUA or MDA _is_ the protective mechanism, use of a
mailing-list is never a problem and nothing is broken. A mailing-
list only becomes a problem when subjected to a highly flawed
blocking scheme unable to provide adequate protection anyway.
Post by Hector Santos
Post by Douglas Otis
Should they set up different domain names, or use a sub-domain?
Maybe, if that is what they want.
This then increases recipient's confusion about what is being checked
with a blocking scheme.
Post by Hector Santos
Post by Douglas Otis
How will increased domain names of the same entity better allow a
recipient to detect a spoof?
Doesn't apply. We are talking about 1 domain. One transaction at a time.
This is about protecting recipients from being spoofed. This is not
about making sure a message jumps through a set of hoops.
Post by Hector Santos
Post by Douglas Otis
You can not offer "pretty good protection" at the MTA based upon
policy blocking.
I sure can and if I didn't think so, I wouldn't be touching or even
looking at DKIM/SSP.
It is hard to understand your goals. What constitutes "pretty good"
when recipients can still be spoofed?
Post by Hector Santos
Post by Douglas Otis
Simple schemes remain where your customers continue to be spoofed.
Not with a DKIM/SSP framework. Phished? Sure. Not SPOOFED.
Huh?
Post by Hector Santos
Plus if a bad guy wanted to BREAK DKIM/SSP, all he has to do is
AVOID using it and stay away from DKIM/SSP protected domains!
Bad actors would only need to provide the appearance of being from
the trusted domain. This is easily done with the proper HTML format
and images. It is also common to see display-names that make it
appear the email-address is being displayed, when it is not. The
visual tricks are vast.
Post by Hector Santos
Post by Douglas Otis
Annotation at the MUA can prevent these schemes,
This is like saying,
"Mom, can I let the rabid dog in the house? He looks so cute!"
Mom is not going to let johnny get in trouble if she can help it.
But just in case, she might give Johnny a rabbies shot.
Not really. Annotation offers a clear indication about which dogs
are known safe. Your scheme lets in any dog and offers no clue which
are safe. Don't expect Johnny to look at the dog's tattoo under
its lip.
Post by Hector Santos
Post by Douglas Otis
works with non-ASCII email-addresses, prevents look-alike and
cousin domains exploits, and permits the use of mailing-lists
without additional domain names.
Out of scope! You're stuck with this because you are looking for a
MUA solution - unrealistic.
This approach can be applied at _both_ the MDA and MUA. Message
annotation has a realistic chance of offering real protection. A
blocking scheme does not!
Post by Hector Santos
Post by Douglas Otis
Policy based blocking is not a desirable feature when it will
likely make the situation worse at substantial costs to resources.
But that premise is highly false.
The DNS transactions required to discover whether policy is within any
domain level, for any message, signed or not, are not trivial
overhead. In addition, there will be support calls dealing with
messages lost by various services. There will be issues created when
the use of EAI extensions is prevented. There is information leaked
to bad actors regarding the level of acceptance achieved at the MTA.
There are still problems related to white-listing and the proper
handling of DSNs that are not improved by a blocking strategy. Comparisons
between blocking and annotation are rather stark in terms of costs
and benefits.

-Doug
Dave Crocker
2006-12-08 22:32:20 UTC
Permalink
Post by B***@cox.com
I would suggest that DKIM operates between the signing MTA and the edge
boundary MTA of the receiving domain, that is, the certifier of DKIM
signatures, which may be a smart MUA but is more likely a filtering MTA
at the ISP.
This is the sort of question that prompted me to add the construct of
Administrative Management Domain (ADMD) to the Internet Mail Architecture draft
<http://bbiw.net/specifications/draft-crocker-email-arch-05.html>

DKIM is envisioned as having signing done within an originating ADMD -- that is,
within a trust boundary associated with the author or at least with the author's
email posting service, and having validation done by a similarly-scoped
environment at the recipient end. (Validation by intermediaries is fine, but
hasn't been a focus.)

Exactly which host within an ADMD will do signing or validating is not
constrained by DKIM's design.

There are operational realities that will constrain the choices for many
ADMDs, but this is not a matter of DKIM design, but rather of handling (or
perhaps MIShandling) behaviors within the ADMD.

Any other statements about host choices are a matter of preference, rather than
need. That the statements might prove true doesn't make them less an
administrative choice.

So, yeah, a scenario that is viewed as highly likely is signing by the outbound
boundary MTA and validating by the inbound boundary MTA. Lots of good reasons
for doing that. None of them makes this scenario mandatory, however.

d/
--
Dave Crocker
Brandenburg InternetWorking
bbiw.net