On Feb 6, 3:27 am, Subcommandante XDelta <***@star.enet.dec.com>
wrote:
> On Sun, 05 Feb 2012 20:45:11 -0500, Arne Vajhøj <***@vajhoej.dk>
> wrote:
> :
>
>
>
> >>>> Take one year of those payments, give that money to Hoff and he will
> >>>> have ported VMS in a couple of afternoons. :-)
>
> >>> I don't think Hoff would signup for that.
>
> >> Whom else then, off the couch? - so to speak.
>
> >You will need a team.
>
> >OS experts, compiler experts, VMS experts, x86-64 experts.
>
> >Expensive.
>
> >Something in the magnitude of 5 million hours ~ 500 million dollars.
>
> >Arne
>
> Aw, heck, it's worth it. How many men and man-weeks would that be
> roughly?
>
> If HP-UX, NSK and VMS aren't getting it in the neck, at least there'd
> be some commonalities and economies of scale, one would imagine.
>
> Taking a slight tangent, if Intel can get away with blue bloody CISC
> murder and continue to do so, a VAX64 chip family with an even
> crispier, crunchier, more orthogonal, more heavenly, instruction set
> than VAX32 (which was the very model of a modern major (CISC)
> architecture) is conceivable.
>
> How much would it cost, using back of the envelope metrics, to design
> and fabricate that according to the latest techniques?
>
> How many hours would it take to port GNU/Linux to the new
> architecture?
>
> (Not really inciting an A V. B debate)
>
> Concomitantly, if HP disentangled the VMS Source Code of IP snarls and
> released it in the public domain, presumably an OSS/VMS Foundation
> could port it to VAX64, and presumably it would still take several
> million hours, but would it be do-able as a rather-extensive community
> project?
>
> Just tapping about in the dark, looking for that unitary photon of
> hope.
>
> On yet another tangent, even if 32bit is considered twee and old
> fashioned, how much would it cost, give or take a ball-park to rebirth
> VAX32 using the latest fabrication and design techniques?
Whether it be 32-bit or 64-bit, I wouldn't suggest fabricating it unless
the demand was a LOT bigger than I'd expect. There are different
approaches that might suit 'testing the water' initially.

Lots of people seem quite happy with pure software emulations in a
modern x86 box. Would one of those with an official seal of approval
and a proper global support infrastructure offer much that isn't
already available?

If emulation in software (even using the technology the industry
laughably calls a 'bare metal' HYPErvisor) isn't for you, then the
next simplest approach is probably to use the amazing technology now
available in a modern FPGA, probably with the system-level
interface(s) built in by the FPGA vendor to reduce the overall effort
required. It doesn't eliminate the core CPU design and validation and
support work, but it does in principle eliminate the system/bus
interface work and should make the actual silicon-related work someone
else's problem.

You can (at least in principle) already do this kind of thing with
modern ARM processors: ARM sell you the design details to put in a
suitable modern FPGA:
http://www.arm.com/products/processors/cortex-m/cortex-m1.php

Doing it with VAX (with the whole job done so the end user can buy a
system not a chip) instead of ARM 'just' needs folk to do the VAX-
specific stuff and the system integration (and the validation and...),
if you assume the FPGA people have done their stuff right. Oh, and a
few lawyers and such, assuming the VAX intellectual property doesn't
belong to you.

In terms of costs, you'd probably be paying a few hundred dollars per
FPGA for the chips, which in itself doesn't seem unreasonable or
uneconomic (how much is an IA64 or an AMD64?). If volumes went through
the roof (which they wouldn't), there are typically ways and means of
spending money to re-implement the same design in a lower-cost-per-chip
process. But there are costs to that work.

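To make that trade-off concrete, here's a back-of-the-envelope sketch in Python. All the figures (NRE costs, per-chip prices, volumes) are purely hypothetical placeholders, not from any real quote — the point is only the shape of the arithmetic: one-off engineering cost amortized over volume, plus recurring chip cost.

```python
# Hypothetical numbers throughout -- illustrates why an FPGA route can
# beat a custom-silicon re-spin at the low volumes a VAX revival would see.

def unit_cost(nre_dollars, per_chip_dollars, units):
    """Cost per shipped system: one-off NRE spread over volume,
    plus the recurring per-chip cost."""
    return nre_dollars / units + per_chip_dollars

# FPGA route: modest NRE (no mask sets), a few hundred dollars per chip.
fpga = unit_cost(nre_dollars=2_000_000, per_chip_dollars=300, units=5_000)

# Custom-silicon re-spin: large NRE, cheap chips -- only wins at volume.
asic = unit_cost(nre_dollars=30_000_000, per_chip_dollars=30, units=5_000)

print(f"FPGA route: ${fpga:,.0f} per system")   # $700 per system
print(f"ASIC route: ${asic:,.0f} per system")   # $6,030 per system
```

At 5,000 units the re-spin's NRE dominates; the crossover only comes at volumes nobody expects here — which is the "ways and means of spending money" caveat above.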
FPGAs today are a lot cleverer than they were ten years ago, but VAXes
aren't all that complicated in modern terms, especially non-SMP VAXes.
This approach *could* have been tried maybe ten years ago. If it was
tried, it didn't catch on. Whether circumstances are sufficiently
different now that it might work is anybody's guess.

FPGA Alpha? Not so sure. The story was that Alpha performance relied
in part on co-design between chip internals and silicon fabrication
details. You lose that co-design capability with an FPGA, where
someone else has already laid out the silicon and all the 'Alpha'
designer could do would be to change the way the pieces connect
together.

Off you go.