One of the key questions concerns processing the last block (with the additional padding bits) in a normal iterative hash function: the entropy of CV_(L-1) is only n bits, i.e., an n-bit domain X maps to an n-bit codomain…
Forum: 2010 Reports
The author of 2010/384 concludes that a narrow-pipe hash function loses entropy and that its codomain shrinks. However, the ideal random compression function C designated by the author…
Forum: 2010 Reports
If the ideal random compression function C is always chosen and kept surjective, i.e., an onto mapping, does the conclusion still hold?
Forum: 2010 Reports
A few weeks ago (7-15-2010), in a discussion on the TLS WG mailing list regarding the applicability of this paper's results to TLS (and to practical crypto in general), I posted the following comments:
Forum: 2010 Reports
In many real-world situations an attacker is able to extend messages with chosen text in an attempt to engineer a collision. In these cases, there may be 160 bits of entropy coming in the…
Forum: 2010 Reports
marshray:
If you enter a random chaining value and a random message into SHA-1's compression function, you put in 512+160 bits of entropy. With very high probability you would get out 160 bits of entropy…
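That claim can be stress-tested on a scaled-down model: replace the 160-bit ideal compression function with a random map on 16-bit values (a toy size, chosen here purely so the whole space can be enumerated) and count how much of the codomain a full-entropy input actually reaches. A minimal sketch:

```python
import random

def image_fraction(n_bits=16, seed=1):
    """Model an ideal compression function as a random map on n-bit values
    and measure what fraction of the codomain is hit when the input has
    full entropy (every input value occurs once)."""
    rng = random.Random(seed)
    n = 1 << n_bits
    # One random output per possible input: a random function f on n-bit values
    outputs = {rng.randrange(n) for _ in range(n)}
    return len(outputs) / n

frac = image_fraction()
print(frac)  # roughly 0.632, i.e. about 1 - 1/e of the codomain is reachable
```

So even with maximal input entropy, a random function's image covers only about 1 − 1/e ≈ 63% of the output space, which is the shrinkage effect the thread is debating.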
Forum: 2010 Reports
When evaluating the effect of this phenomenon on actual hash designs, it's probably important to look inside the block structure as well. For example, SHA-1:
for i from 0 to 79 // thanks Wikipedia
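For context, that loop can be fleshed out into a complete, runnable SHA-1, transcribed from the public pseudocode (illustration only, not production crypto). The feed-forward addition of the old chaining value at the end of each block is the part relevant to this thread:

```python
import struct

def _rol(x, n):
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def sha1(message: bytes) -> str:
    h = [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0]
    # Padding: 0x80, zero bytes, then the message length in bits (big-endian 64-bit)
    ml = len(message) * 8
    message += b"\x80" + b"\x00" * ((55 - len(message)) % 64) + struct.pack(">Q", ml)
    for off in range(0, len(message), 64):
        w = list(struct.unpack(">16I", message[off:off + 64]))
        for i in range(16, 80):  # message schedule expansion
            w.append(_rol(w[i-3] ^ w[i-8] ^ w[i-14] ^ w[i-16], 1))
        a, b, c, d, e = h
        for i in range(80):      # the "for i from 0 to 79" loop quoted above
            if i < 20:
                f, k = (b & c) | (~b & d), 0x5A827999
            elif i < 40:
                f, k = b ^ c ^ d, 0x6ED9EBA1
            elif i < 60:
                f, k = (b & c) | (b & d) | (c & d), 0x8F1BBCDC
            else:
                f, k = b ^ c ^ d, 0xCA62C1D6
            a, b, c, d, e = ((_rol(a, 5) + f + e + k + w[i]) & 0xFFFFFFFF,
                             a, _rol(b, 30), c, d)
        # Feed-forward: the old chaining value is added back in (Davies-Meyer style)
        h = [(x + y) & 0xFFFFFFFF for x, y in zip(h, [a, b, c, d, e])]
    return "".join(f"{x:08x}" for x in h)

print(sha1(b"abc"))  # a9993e364706816aba3e25717850c26c9cd0d89d
```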
Forum: 2010 Reports
Dear Marsh,
From the link you have sent I quote:
"I did some Monte Carlo testing and found that my intuition was mostly wrong. The entropy loss effect is also observable with the Davies-Meyer construction…"
Forum: 2010 Reports
Yes, but some of the possible compression functions approximate a random function more closely than others. If we have information that a given construction does not, then we may be obligated not to a…
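A toy experiment makes that distinction concrete. For a fixed message block, a block cipher acts as a permutation of the chaining value, so h -> E_m(h) alone is bijective and loses nothing; the Davies-Meyer output h -> E_m(h) XOR h, by contrast, behaves like a random function, whose image covers only about 1 - 1/e of the space. A sketch with a random 16-bit permutation standing in for the cipher (toy parameters, assumed purely for illustration):

```python
import random

rng = random.Random(0)
N = 1 << 16
perm = list(range(N))      # a random permutation models E(m, .) for one fixed m
rng.shuffle(perm)

image_perm = {perm[h] for h in range(N)}        # h -> E_m(h): pure permutation
image_dm = {perm[h] ^ h for h in range(N)}      # h -> E_m(h) XOR h: Davies-Meyer

print(len(image_perm) / N)   # 1.0: bijective, no chaining-value entropy lost
print(len(image_dm) / N)     # close to 1 - 1/e: image shrinks like a random function's
```

This is the same effect Marsh's Monte Carlo run reported: the feed-forward is what turns an invertible core into a random-function-like map.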
Forum: 2010 Reports
marshray Wrote:
-------------------------------------------------------
> So the
>
> h_(i+1) = h_i + F(h_i, m_i)
>
> construction used here seems to be credited to
> Davies-Meyer. It's not…
Forum: 2010 Reports
So the
h_(i+1) = h_i + F(h_i, m_i)
construction used here seems to be credited to Davies-Meyer. It's not used in MD2, but it is used in MD4 (RFC 1186, October 1990), MD5, SHA-1, and SHA-2.
However…
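That construction is compact enough to write down directly. In this sketch F is a made-up 32-bit mixer (not any real primitive), standing in for the block-cipher core just to show the shape of the iteration:

```python
MASK = (1 << 32) - 1

def F(h, m):
    """Toy stand-in for the compression core -- NOT a real cryptographic primitive."""
    x = ((h ^ m) * 0x9E3779B1) & MASK
    x ^= x >> 15
    return (x * 0x85EBCA77) & MASK

def davies_meyer_hash(blocks, iv=0x67452301):
    """Iterate the Davies-Meyer shape: h_(i+1) = h_i + F(h_i, m_i)."""
    h = iv
    for m in blocks:
        h = (h + F(h, m)) & MASK   # feed-forward addition of the old chaining value
    return h

digest = davies_meyer_hash([0xDEADBEEF, 0x01234567])
```

The point of the shape is that F alone may be invertible in h, but the final addition of h_i makes each step non-invertible, which is exactly where the entropy-loss discussion in this thread applies.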
Forum: 2010 Reports
The model
h_(i+1) = compress(h_i, m_i)
may be a bit of an oversimplification. Notice that in actual SHA-256 the result of compress(h_i, m_i) is added into the h_i input chaining values…
Forum: 2010 Reports
I am receiving very good comments, suggestions and corrections from Jean-Philippe and Orr, so soon I will post a corrected version of the paper.
But Duchman is right: this is more theoretical work a…
Forum: 2010 Reports
The latest version I've read contains mistakes which are currently under discussion with the authors (offline).
For example, the claims concerning the loss of entropy if you iterate the compression…
Forum: 2010 Reports
Three remarks on this paper:
1. It is a nice discovery to show that narrow-pipe hash functions cannot ever replace random oracles. From this point of view, wide-pipe hash designs have obvious advantages…
Forum: 2010 Reports