On Sun, 20 Mar 2005 10:32:27 +0100, Karsten Nyblad <***@nospam.nospam>
wrote:
> Bill Gunshannon wrote:
>> In article <***@hyrrokkin>,
>> "Tom Linden" <***@kednos.com> writes:
>>
>>> On 19 Mar 2005 14:07:40 GMT, Bill Gunshannon <***@cs.uofs.edu> wrote:
>>>
>>>
>>>> In article <***@eisner.encompasserve.org>,
>>>> ***@SpamCop.net (Larry Kilgallen) writes:
>>>>
>>>>> In article <XMJ_d.4266$***@trnddc06>, John Santos
>>>>> <***@egh.com> writes:
>>>>>
>>>>>
>>>>>> This is where C loses. To write safe code in C, you have to
>>>>>> bounds-check everything, and verify all string and buffer lengths
>>>>>> and do it religiously.
>>>>>>
>>>>>> If all your code is reviewed and you always parameterize
>>>>>> *everything*
>>>>>> (so e.g. when someone changes the maximum size of a variable,
>>>>>> everything uses the correct new size automatically) and you have a good
>>>>>> version control system so everything gets recompiled when it needs
>>>>>> to be, and your subroutines are all prepared to deal with the
>>>>>> memory faults they might get when someone passes a null-terminated
>>>>>> string and forgets to add the null at the end, etc. etc., then you
>>>>>> can use C safely. But isn't it easier to use a language that does
>>>>>> this for you and never forgets to check something, "because it's
>>>>>> just a little 1-byte local temp string and what can possibly go
>>>>>> wrong?"
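
To make John's point concrete, here is a minimal sketch of the failure
mode he describes (buffer size and strings invented for illustration):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char name[4];

        /* strncpy() does NOT add a terminating NUL when the source
           fills the destination completely...                       */
        strncpy(name, "VMS!", sizeof name);

        /* ...so this strlen() walks off the end of `name` into
           whatever happens to sit next to it in memory.             */
        printf("%zu\n", strlen(name));     /* undefined behaviour    */

        /* The defensive step every caller must remember to take:    */
        name[sizeof name - 1] = '\0';
        printf("%zu\n", strlen(name));     /* now safe: prints 3     */

        return 0;
    }
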
>>>>>
>>>>> Ultimately, the problem is how does one _prove_ that all the proper
>>>>> steps were taken in a giant program?
>>>>>
>>>>> Looking at a not-so-giant program, when DEC wrote an A1 Security
>>>>> Kernel
>>>>> for the high-end VAX, they wrote it in Pascal, with no RTL allowed,
>>>>> in order to support mathematical proof of code correctness.
>>>>
>>>> And did the code reviews include looking at the source for the Pascal
>>>> compiler as well? If not, then you had no "mathematical proof of code
>>>> correctness". Some things cannot be "proven" to any real degree of
>>>> certainty.
>>>
>>> There is still the Trojan horse issue, as we have discussed before;
>>> see "Reflections on Trusting Trust" by Ken Thompson:
>>> http://www.acm.org/classics/sep95/
>> That has already been mentioned here in the recent past, but I
>> also meant that it is possible for the compiler to make mistakes
>> too. And the more complex the language, the more likely it
>> becomes that an error sits silently by, waiting for the right
>> combination of code to bring it to the surface.
>> bill
>
> Actually, this goes beyond the compiler. You also have to prove the
> correctness of the computer, including firmware and microcode. Let us
> assume that you have actually proved that. Then you have to ensure that
> the computer always works correctly no matter what, e.g., that there is
> no gate in any chip that fails once in every 10**20 operations.
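
To put 10**20 in perspective (illustrative numbers only): a single gate
switching 10**9 times per second needs 10**11 seconds, over 3000 years,
to reach 10**20 operations; but a chip containing 10**8 such gates
reaches that combined count in about 10**3 seconds, a quarter of an
hour.
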
>
> Such proofs are so big that no human can carry them out without error.
> Thus you need computer programs to check the proofs, and then you have
> to prove the correctness of those theorem provers. But Kurt Gödel
> proved in 1931 that in order to prove the consistency of a logic you
> need a more powerful logic. There are theorem provers built around a
> small trusted core that performs all the logical inferences, but even
> those small cores have contained errors.
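
The "small trusted core" remark deserves an illustration. Here is a toy
sketch of the LCF idea in C (all names invented): the intent is that the
inference functions below are the only way to manufacture a value of
type Thm, so every Thm is derivable by construction. Real systems
enforce this abstraction with ML's abstract types; in C one would have
to hide the struct behind an opaque pointer in a header.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Formulas: either an atom, or an implication lhs -> rhs.      */
    typedef struct Formula {
        const char *atom;                /* non-NULL iff an atom    */
        const struct Formula *lhs, *rhs; /* used iff atom is NULL   */
    } Formula;

    /* A theorem.  In a real LCF kernel this type is abstract, so
       the three rule functions below are the ONLY way to build one. */
    typedef struct { const Formula *concl; } Thm;

    static const Formula *mk_atom(const char *s)
    {
        Formula *f = calloc(1, sizeof *f);
        f->atom = s;
        return f;
    }

    static const Formula *mk_imp(const Formula *a, const Formula *b)
    {
        Formula *f = calloc(1, sizeof *f);
        f->lhs = a; f->rhs = b;
        return f;
    }

    static int eq(const Formula *a, const Formula *b)
    {
        if (a->atom && b->atom) return strcmp(a->atom, b->atom) == 0;
        if (a->atom || b->atom) return 0;
        return eq(a->lhs, b->lhs) && eq(a->rhs, b->rhs);
    }

    /* --- the trusted core: three inference rules ---------------- */

    /* Trust f outright; every proof rests on some axioms.          */
    static Thm axiom(const Formula *f)
    {
        Thm t = { f }; return t;
    }

    /* Axiom schema K:  |- p -> (q -> p).                           */
    static Thm axiom_k(const Formula *p, const Formula *q)
    {
        Thm t = { mk_imp(p, mk_imp(q, p)) }; return t;
    }

    /* Modus ponens: from |- a -> b and |- a, conclude |- b.
       Rejecting misapplied rules is what makes the core "trusted". */
    static Thm mp(Thm ab, Thm a)
    {
        if (ab.concl->atom || !eq(ab.concl->lhs, a.concl)) {
            fprintf(stderr, "mp: rule misapplied\n");
            exit(EXIT_FAILURE);
        }
        Thm t = { ab.concl->rhs }; return t;
    }

    /* ------------------------------------------------------------ */

    int main(void)
    {
        const Formula *p = mk_atom("p"), *q = mk_atom("q");
        Thm have_p = axiom(p);        /* assume |- p              */
        Thm k      = axiom_k(p, q);   /* |- p -> (q -> p)         */
        Thm result = mp(k, have_p);   /* hence |- q -> p          */
        printf("derived: %s -> %s\n",
               result.concl->lhs->atom, result.concl->rhs->atom);
        return 0;
    }
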
>
> Then there is another issue. Somehow you have to convert your
> requirement specification into logical formulas before you can attempt
> to prove that these formulas are theorems, but there is no 100% safe
> way to carry out that conversion. To me that is a much more severe
> problem than the correctness of the tools. Besides, once we have a
> method for developing correct programs, that method can also be
> applied to compilers and other tools.
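
A standard illustration of that conversion problem: formalize "the
routine sorts its input array a[0..n-1]" as nothing more than

    for all i in 0 .. n-2 :  a[i] <= a[i+1]

and a routine that overwrites the array with zeros satisfies the
specification perfectly. The requirement that the output be a
permutation of the input was lost in the translation, and no proof
tool can notice what the formula never said.
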
>
> However, people tend to say that because there is no perfect system
> for guaranteeing the correctness of computer programs, the idea should
> be abandoned. I do not think so. We have to live with technology that
> fails. I cannot be 100% sure that the building I am sitting in while
> writing this will not collapse. I cannot be sure that the computer
> screen in front of me will not implode and spray me with glass
> splinters. Why not use these methods and get the orders-of-magnitude
> improvement in correctness that could most likely be gained that way?
Somewhat reminiscent of Hilbert's response to Russell's paradox:
http://plato.stanford.edu/entries/russell-paradox/