Discussion:
QBasic
philo
2016-02-05 17:25:55 UTC
Last night I was going through some old paperwork and on the back of a
report I found an old (very simple) program I wrote in Qbasic.

Did a drive search and found a backup from a 386 I had worked on a long
time ago and it had my program on it along with a few games.

I mainly use Linux now but installed DOSBox to see if the stuff would run.
It did!

Played Nibbles a few times just for laughs and enjoyed it.

I am not a gamer and never played anything newer than Tetris.
Gene Wirchenko
2016-02-05 22:56:20 UTC
Post by philo
Last night I was going through some old paperwork and on the back of a
report I found an old (very simple) program I wrote in Qbasic.
Did a drive search and found a backup from a 386 I had worked on a long
time ago and it had my program on it along with a few games.
I mainly use Linux now but installed DosBox to see if the stuff would run.
It did!
It works fine. I have some QBASIC utilities that I wrote to help
me with software development. I now run them under DOSBox, and they
work fine. I have been doing so for years.

Some tips:

1) For maximum speed, you should probably crank up the cycles
setting. I run my stuff at 70,000 cycles.

2) High cycles will steal from other tasks. If I have two DOSBox
sessions each running at 70,000 cycles, audio playback can stutter on
my system.
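Gene's cycles setting can also be made permanent in the DOSBox configuration file rather than typed each session. A sketch of the relevant [cpu] section (option names as in the DOSBox 0.74 sample config; the file's name and location vary by platform):

```ini
[cpu]
core=auto
# "cycles=auto" guesses a speed; "cycles=fixed N" pins emulation at N
# instructions per millisecond, e.g. the 70,000 mentioned above.
cycles=fixed 70000
```

Cycles can also be adjusted at runtime with Ctrl-F11 (decrease) and Ctrl-F12 (increase).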
Post by philo
Played Nibbles a few times just for laughs and enjoyed it.
I am not a gamer and never played anything newer than Tetris.
Sincerely,

Gene Wirchenko
philo
2016-02-06 00:13:33 UTC
Post by Gene Wirchenko
Post by philo
[snip]
It works fine. I have some QBASIC utilities that I wrote to help
me with software development. I now run them under DOSBox, and they
work fine. I have been doing so for years.
1) For maximum speed, you should probably crank up the cycles
setting. I run my stuff at 70,000 cycles.
2) High cycles will steal from other tasks. If I have two DOSBox
sessions each running at 70,000 cycles, audio playback can stutter on
my system.
Thanks, but...
I've left it at the default of 3,000 cycles and it's just fine.

I don't expect I will be using it much.
Post by Gene Wirchenko
Post by philo
Played Nibbles a few times just for laughs and enjoyed it.
I am not a gamer and never played anything newer than Tetris.
Sincerely,
Gene Wirchenko
Michael Black
2016-02-06 03:40:15 UTC
Post by philo
[snip]
Thanks, but...
I've left it at the default of 3000 cycles and it's just fine
I don't expect I will be using it much.
No? I can see Bill Gates's disappointment: "We could have cornered the
marketplace, come out with a version of QBASIC for Linux".

I wonder if Microsoft ever made a version of BASIC for XENIX?

Michael
philo
2016-02-06 04:32:06 UTC
Post by Michael Black
Post by philo
[snip]
Thanks, but...
I've left it at the default of 3000 cycles and it's just fine
I don't expect I will be using it much.
No? I can see Bill Gates's disappointment: "We could have cornered the
marketplace, come out with a version of QBASIC for Linux".
I wonder if Microsoft ever made a version of BASIC for XENIX?
Michael
http://chiclassiccomp.org/docs/content/computing/Microsoft/Microsoft_Basic_8086Xenix_UserGuide.pdf
Gene Buckle
2016-02-11 18:22:45 UTC
To: philo
From Newsgroup: alt.folklore.computers
Post by Michael Black
Post by philo
I don't expect I will be using it much.
No? I can see Bill Gates's disappointment: "We could have cornered the
marketplace, come out with a version of QBASIC for Linux".
Here's "QuickBasic" for Linux: http://www.qb64.net/ :)

g.
--
Proud owner of F-15C 80-0007
http://www.f15sim.com - The only one of its kind.
http://www.diy-cockpits.org/coll - Go Collimated or Go Home.
Some people collect things for a hobby. Geeks collect hobbies.

ScarletDME - The red hot Data Management Environment
A Multi-Value database for the masses, not the classes.
http://scarlet.deltasoft.com - Get it _today_!
--- Synchronet 3.16c-Win32 NewsLink 1.103
The Retro Archive - telnet://bbs.retroarchive.org
Quadibloc
2016-02-12 01:16:01 UTC
Post by Gene Buckle
Here's "QuickBasic" for Linux: http://www.qb64.net/ :)
They also have a version that runs on 32-bit Windows ... but *not* one for 64-bit
Windows!

John Savard
h***@bbs.cpcn.com
2016-02-12 02:49:12 UTC
Post by Quadibloc
They also have a version that runs on 32-bit Windows ... but *not* one for 64-bit
Windows!
Wasn't there some sort of add-on for modern Windows to allow old DOS
applications, such as QBASIC, to run on it?
philo
2016-02-12 11:25:24 UTC
Post by h***@bbs.cpcn.com
Post by Quadibloc
They also have a version that runs on 32-bit Windows ... but *not* one for 64-bit
Windows!
Wasn't there some sort of add-on for modern Windows to allow old DOS
applications, such as QBASIC, to run on it?
The "problem" with Windows is that the 64-bit version, though it can run
32-bit apps, cannot run 16-bit apps.
gareth
2016-02-12 11:50:25 UTC
Post by philo
The "problem" with Windows is that the 64bit version , though it can run
32 bit apps, cannot run 16 bit apps.
Do the 64-bit processors still boot into 16-bit mode, though?
Scott Lurndal
2016-02-12 14:02:43 UTC
Post by gareth
Post by philo
The "problem" with Windows is that the 64bit version , though it can run
32 bit apps, cannot run 16 bit apps.
Do the 64-bit processors still boot into 16-bit mode, though?
They still boot in real mode. Then the OS switches to protected
mode. Then the OS switches on paging. Then the OS switches to
"long mode". Between each step, various initialization functions
take place (GDT/LDT/IDT setup, enabling the A20 gate, flushing the
keyboard controller, etc.).

# jmp enable_longmode,[SS_CODE32] (Invoke address using GDT entry #2)
.byte 0x66 # code32 override
.byte 0xea # jmpi instruction
.long enable_longmode # Address to invoke
.word SS_CODE32 # GDT[2]

enable_longmode:
movl $SS_DATA, %eax
movl %eax, %ds
movl %eax, %es
movl %eax, %fs
movl %eax, %gs

# Set up the protected mode stack pointer

lss stack_segdesc, %esp # Load SS:ESP
movl cs_realmode, %eax # Get CS address

# Set up cr3 with PML4 base before paging is enabled

leal identitypml4, %eax # Get PML4 base address
subl $identitypdp, %eax
addl $0xa000, %eax # Bump to real base
movl %eax, %cr3

movl %cr4, %eax # Get CR4
orl $PAE_BIT, %eax # Set PAE for long mode
movl %eax, %cr4 # Set CR4

# Enable long mode
movl $0xC0000080, %ecx # EFER address
rdmsr # Read EFER Register into EAX
orl $LME_BIT, %eax # Enable long mode
wrmsr # Set EFER

# Enable paging to activate long mode
movl %cr0, %eax # Get CR0
mov $0x80050033,%eax
movl %eax, %cr0 # Set CR0
jmp 1f # Clear pipeline


1:
movl memsize, %eax # Amount of e820 buf space remaining
movzwl ap_start, %ebx # BSP or AP flag
movl cs_realmode, %esi # Real-mode base address

# Here, we are still in the compatibility
# mode. A far jump is needed to actually activate the 64bit mode.

# jmp 0x10000,[SS_CODE64] (Invoke address using GDT entry #2)

.byte 0xea # jmpi instruction
.long DVMM_START # Address to invoke, dvmmstart is based at DVMM_START
.word SS_CODE64 # GDT[2]


lea bummer, %si
call print
9: hlt
jmp 9b

bummer:
.string "Fell through jump to C++ code - halting\r\n"
gareth
2016-02-12 23:24:28 UTC
"Scott Lurndal" <***@slp53.sl.home> wrote in message news:7Clvy.35645$***@fx09.iad...

Thank you, something to study and think about.
Post by Scott Lurndal
Post by gareth
Post by philo
The "problem" with Windows is that the 64bit version , though it can run
32 bit apps, cannot run 16 bit apps.
Do the 64-bit processors still boot into 16-bit mode, though?
They still boot in real-mode. Then the OS switches to protected
mode. Then the OS switches on paging. Then the OS switches to
"long mode". Between each step, various initialization functions
take place (GDT/LDT/IDT, enable the A20 gate, flush kbd controller, etc)
[assembly listing snipped]
Quadibloc
2016-02-12 17:50:15 UTC
Post by philo
The "problem" with Windows is that the 64bit version , though it can run
32 bit apps, cannot run 16 bit apps.
And that has to do with a limitation of Intel and AMD 64-bit processors:
they have given up the capability to switch from 64-bit mode to 16-bit
mode without rebooting.

One would have thought Intel would have learned from the debacle with the 80286
not to make that mistake again.

John Savard
J. Clarke
2016-02-12 11:53:52 UTC
Post by h***@bbs.cpcn.com
Post by Quadibloc
They also have a version that runs on 32-bit Windows ... but *not* one for 64-bit
Windows!
Wasn't there some sort of add-on for modern Windows to allow old DOS
applications, such as QBASIC, to run on it?
Windows comes with a virtual machine; however, Microsoft in recent
releases has crippled it in enough ways to make it less useful than it
once was. It was used to support a "Virtual XP" environment in some
versions of Windows 7 that allowed 32-bit-only code to run, but that
broke with some 16-bit code.
jmfbahciv
2016-02-12 13:40:03 UTC
Post by J. Clarke
Post by h***@bbs.cpcn.com
Post by Quadibloc
They also have a version that runs on 32-bit Windows ... but *not* one for 64-bit
Windows!
Wasn't there some sort of add-on for modern Windows to allow old DOS
applications, such as QBASIC, to run on it?
Windows comes with a virtual machine, however Microsoft in recent
releases has crippled it in enough ways to make it less useful than it
once was. It was used to support a "Virtual XP" in some versions of
Windows 7 that allowed 32-bit-only code to run, but that broke with some
16-bit code.
Old games are installed in an X86 directory.

/BAH
Quadibloc
2016-02-12 17:53:11 UTC
Post by jmfbahciv
Old games are installed in an X86 directory.
Really old games - 16-bit programs - won't work from there either.

The x86 directory is used even for newer programs that run directly on the
processor, instead of being in the newer "managed code" that Microsoft would like
people to switch to - a sort of P-code.

John Savard
jmfbahciv
2016-02-13 14:37:12 UTC
Post by Quadibloc
Post by jmfbahciv
Old games are installed in an X86 directory.
Really old games - 16-bit programs - won't work from there either.
I'll have to look at the hard/software requirements of the games I have.
I've run games on Win7 system which has XP requirements spec'ed on the
game box.
Post by Quadibloc
The x86 directory is used even for newer programs that run directly on the
processor, instead of being in the newer "managed code" that Microsoft would like
people to switch to - a sort of P-code.
I wonder why unless their libraries lagged behind the newer hardware.

/BAH
Scott Lurndal
2016-02-12 14:04:37 UTC
Post by J. Clarke
Post by h***@bbs.cpcn.com
Post by Quadibloc
They also have a version that runs on 32-bit Windows ... but *not* one for 64-bit
Windows!
Wasn't there some sort of add-on for modern Windows to allow old DOS
applications, such as QBASIC, to run on it?
Windows comes with a virtual machine, however Microsoft in recent
releases has crippled it in enough ways to make it less useful than it
once was. It was used to support a "Virtual XP" in some versions of
Windows 7 that allowed 32-bit-only code to run, but that broke with some
16-bit code.
Note that when AMD added 64-bit support to the Intel Architecture (long mode),
the vm86 hardware support (used by 32-bit Windows to support real-mode code[*])
was no longer available in long mode.

[*] Called Wow (Windows on Windows) in NT4.0 which became Win2k.
Quadibloc
2016-02-12 17:55:33 UTC
Post by Scott Lurndal
Note that when AMD added 64-bit support to the Intel Architecture (long mode),
the vm86 hardware support (used by 32-bit windows to support real-mode code[*]) was
no longer available in long-mode.
And when Intel gave up on Itanium being the wave of the future, and decided to
accept AMD's version, calling it EM64T, despite having the experience of the
80286 debacle, they failed to correct such an obvious mistake.

Total, eternal upwards compatibility. That is what we want, so we don't have to
run out and buy new software unless we _want_ to.

John Savard
Charlie Gibbs
2016-02-12 20:49:55 UTC
Post by Quadibloc
Post by Scott Lurndal
Note that when AMD added 64-bit support to the Intel Architecture (long
mode), the vm86 hardware support (used by 32-bit windows to support
real-mode code[*]) was no longer available in long-mode.
And when Intel gave up on Itanium being the wave of the future, and decided
to accept AMD's version, calling it EM64T, despite having the experience of
the 80286 debacle, they failed to correct such an obvious mistake.
Total, eternal upwards compatibility. That is what we want, so we don't have
to run out and buy new software unless we _want_ to.
"Intel put the 'backward' in 'backward compatibility'." -- me

Between VirtualBox and dosemu on my Linux box, I have no complaints.
--
/~\ ***@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!
Bob Eager
2016-02-12 23:21:53 UTC
Post by Charlie Gibbs
Post by Quadibloc
Post by Scott Lurndal
Note that when AMD added 64-bit support to the Intel Architecture
(long mode), the vm86 hardware support (used by 32-bit windows to
support real-mode code[*]) was no longer available in long-mode.
And when Intel gave up on Itanium being the wave of the future, and
decided to accept AMD's version, calling it EM64T, despite having the
experience of the 80286 debacle, they failed to correct such an obvious
mistake.
Total, eternal upwards compatibility. That is what we want, so we don't
have to run out and buy new software unless we _want_ to.
"Intel put the 'backward' in 'backward compatibility'." -- me
And it still has the half-byte carry (auxiliary carry) flag, I believe.
For compatibility with the 4004.
--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
Quadibloc
2016-02-12 23:55:27 UTC
Post by Bob Eager
And it still has the half byte carry bit, I believe. For compatibility
with the 4004.
No, that would be for the Decimal Adjust Accumulator instruction.

Which couldn't go back further than the 8008.

John Savard
Bob Eager
2016-02-13 00:26:17 UTC
Post by Quadibloc
Post by Bob Eager
And it still has the half byte carry bit, I believe. For compatibility
with the 4004.
No, that would be for the Decimal Adjust Accumulator instruction.
Which couldn't go back further than the 8008.
Oh well, nearly as far!
--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
Scott Lurndal
2016-02-12 21:09:47 UTC
Post by Quadibloc
Post by Scott Lurndal
Note that when AMD added 64-bit support to the Intel Architecture (long
mode), the vm86 hardware support (used by 32-bit windows to support
real-mode code[*]) was no longer available in long-mode.
And when Intel gave up on Itanium being the wave of the future, and decided
to accept AMD's version, calling it EM64T, despite having the experience of
the 80286 debacle, they failed to correct such an obvious mistake.
It wasn't a mistake; it was a smart decision. You want to run
30-year-old software, buy a 30-year-old CPU.

The VM86 stuff never quite worked right anyway, and it was very painful
to implement in the processor and operating software. We used it at
SGI in the late '90s in an early hypervisor product called Crucible that
never made it out the door (but you could run linux _and_ windows on a
two-core intel processor at the same time).
Stephen Sprunk
2016-02-12 23:11:06 UTC
Post by Scott Lurndal
Post by Quadibloc
Post by Scott Lurndal
Note that when AMD added 64-bit support to the Intel
Architecture (long mode), the vm86 hardware support (used by
32-bit windows to support real-mode code[*]) was no longer
available in long-mode.
And when Intel gave up on Itanium being the wave of the future,
and decided to accept AMD's version, calling it EM64T, despite
having the experience of the 80286 debacle, they failed to
correct such an obvious mistake.
It wasn't a mistake, it was a smart decision. You want to run 30
year old software, buy a 30-year old CPU.
You don't need a 30-year-old CPU; a modern CPU will run 16-bit code just
fine under a 16-bit or 32-bit OS, just not under a 64-bit OS.

S
--
Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking
h***@bbs.cpcn.com
2016-02-13 07:27:19 UTC
Post by Stephen Sprunk
You don't need a 30-year-old CPU; a modern CPU will run 16-bit code just
fine under a 16-bit or 32-bit OS, just not under a 64-bit OS.
Can the 64-bit OS be 'adjusted' somehow to accept 16-bit code?
J. Clarke
2016-02-13 11:26:36 UTC
Post by h***@bbs.cpcn.com
Post by Stephen Sprunk
You don't need a 30-year-old CPU; a modern CPU will run 16-bit code just
fine under a 16-bit or 32-bit OS, just not under a 64-bit OS.
Can the 64 bit OS be 'adjusted' somehow to accept 16 bit code?
The limitation there is a choice that Intel made in the hardware
design--if enough cards and letters arrive at Intel headquarters, maybe
they'll revise that decision. The OS can't fix it except by providing
emulation, and Microsoft seems to have decided that there isn't enough
demand to justify it. DOSBox, which is freeware, will run quite a lot
of 16-bit code on 64-bit Windows. It's not a PC emulator, it's a
DOS-on-a-PC emulator, which means that the DOS prompt is rather limited.
Stephen Sprunk
2016-02-13 18:24:16 UTC
Post by h***@bbs.cpcn.com
Post by Stephen Sprunk
You don't need a 30-year-old CPU; a modern CPU will run 16-bit code
just fine under a 16-bit or 32-bit OS, just not under a 64-bit OS.
Can the 64 bit OS be 'adjusted' somehow to accept 16 bit code?
16-bit protected mode applications could run under a long mode OS, in
theory, but there are so few* that it's not worth the effort.

16-bit real mode applications can only run in real or virtual modes, but
neither is available under a long mode OS. Neither AMD nor Intel seems
interested in removing that limitation.

16-bit unreal mode applications can only run in real mode; they can't
even run under a protected mode OS, much less a long mode one.

* Except Win16 installers for older Win32 apps; Win64 recognizes (most
of) these and substitutes a special Win32 installer that can read the
Win16 installers' data files and emulate their behavior. Clever.
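The NE-vs-PE distinction that makes such recognition possible is visible right in the file headers: Win16 binaries carry the old NE ("new executable") header, Win32/Win64 binaries carry a PE header, and both begin with a DOS MZ stub whose e_lfanew field points at whichever one is present. A minimal Python sketch (an illustrative helper, not Microsoft's actual detection code):

```python
import struct

def exe_kind(data: bytes) -> str:
    """Classify a DOS/Windows executable by its headers (illustrative only)."""
    if len(data) < 0x40 or data[:2] != b"MZ":
        return "not an MZ executable"
    # e_lfanew at offset 0x3C points to the "new" (NE/PE) header, if present
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    sig = data[e_lfanew:e_lfanew + 4]
    if sig[:2] == b"NE":
        return "16-bit Windows (NE)"
    if sig == b"PE\x00\x00":
        return "32/64-bit Windows (PE)"
    return "plain DOS (MZ only)"
```

Real loaders go on to check the PE header's machine field to tell 32-bit from 64-bit images; this sketch stops at the signature.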

S
--
Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking
h***@bbs.cpcn.com
2016-02-13 07:26:04 UTC
Post by Scott Lurndal
It wasn't a mistake, it was a smart decision. You want to run 30 year
old software, buy a 30-year old CPU.
Just as an aside, in the mainframe world, we routinely ran 30- or even
40-year-old software. The 30 y/o stuff runs in native mode. If the 40 y/o
stuff was written for S/360, it will run in native mode. If it was written
for a prior generation of hardware, it would run under emulation.
Peter Flass
2016-02-13 14:43:50 UTC
Post by h***@bbs.cpcn.com
Post by Scott Lurndal
It wasn't a mistake, it was a smart decision. You want to run 30 year
old software, buy a 30-year old CPU.
Just as an aside, in the mainframe world, we routinely ran 30 or even
40 year old software. The 30 y/o stuff runs native mode. If the 40 y/o
stuff was written for S/360, it will run native mode. If it was written
for a prior generation of hardware, it would run under emulation.
Most companies try to develop products that give the customer what he
wants. Microsoft develops products that give the customer what microsoft
wants.
--
Pete
J. Clarke
2016-02-13 15:29:22 UTC
In article <630582847.477066945.696306.peter_flass-
Post by Peter Flass
Post by h***@bbs.cpcn.com
Post by Scott Lurndal
It wasn't a mistake, it was a smart decision. You want to run 30 year
old software, buy a 30-year old CPU.
Just as an aside, in the mainframe world, we routinely ran 30 or even
40 year old software. The 30 y/o stuff runs native mode. If the 40 y/o
stuff was written for S/360, it will run native mode. If it was written
for a prior generation of hardware, it would run under emulation.
Most companies try to develop products that give the customer what he
wants. Microsoft develops products that give the customer what microsoft
wants.
We're currently porting some code written in Fortran in the early '70s
to C, mostly because IBM hasn't issued a version upgrade of Fortran on
the mainframe since some time in the '80s. It's not EOL--they'll fix
bugs if they find them and when they add new features to the hardware
they _may_ update the compiler to provide support for them.
Quadibloc
2016-02-13 16:20:42 UTC
Post by J. Clarke
In article <630582847.477066945.696306.peter_flass-
Post by Peter Flass
Most companies try to develop products that give the customer what he
wants. Microsoft develops products that give the customer what microsoft
wants.
We're currently porting some code written in Fortran in the early '70s
to C, mostly because IBM hasn't issued a version upgrade of Fortran on
the mainframe since some time in the '80s. It's not EOL--they'll fix
bugs if they find them and when they add new features to the hardware
they _may_ update the compiler to provide support for them.
Well, that's IBM trying to serve the customers it has.

It sells System z architecture at premium prices, mainly to people who want to
use its premium database products, which run most robustly on its legacy
hardware. Its main competition is Oracle.

People wanting to do scientific computation on IBM hardware are expected to use
PowerPC hardware, which offers better price-performance. Fortran for that
hardware, I presume, is kept more up-to-date.

John Savard
h***@bbs.cpcn.com
2016-02-13 17:21:40 UTC
Post by Quadibloc
People wanting to do scientific computation on IBM hardware are expected to use
PowerPC hardware, which offers better price-performance. Fortran for that
hardware, I presume, is kept more up-to-date.
There's a lot of stuff on the IBM website about their efforts in PowerPC
Fortran.

I think big number crunchers like the weather bureau make use of it.
J. Clarke
2016-02-13 17:44:02 UTC
Post by Quadibloc
Post by J. Clarke
In article <630582847.477066945.696306.peter_flass-
Post by Peter Flass
Most companies try to develop products that give the customer what he
wants. Microsoft develops products that give the customer what microsoft
wants.
We're currently porting some code written in Fortran in the early '70s
to C, mostly because IBM hasn't issued a version upgrade of Fortran on
the mainframe since some time in the '80s. It's not EOL--they'll fix
bugs if they find them and when they add new features to the hardware
they _may_ update the compiler to provide support for them.
Well, that's IBM trying to serve the customers it has.
The thing is, they have updated their C and C++ to support 64-bit
operation, there's a new COBOL coming if it's not already out, and they've
put Java on it, but they've left Fortran stuck in the '80s.
Post by Quadibloc
It sells System z architecture at premium prices, mainly to people who want to
use its premium database products that run the most robustly on its legacy
hardware. Its main competition is Oracle.
People wanting to do scientific computation on IBM hardware are expected to use
PowerPC hardware, which offers better price-performance. Fortran for that
hardware, I presume, is kept more up-to-date.
And how about people who have been doing financial computation since the
'60s on that hardware?
Quadibloc
2016-02-13 19:31:56 UTC
Post by J. Clarke
The thing is they have updated their C and C++ to support 64-bit
operation, there's a new Cobol coming if it's not already out, they've
put Java on it, but they've left Fortran stuck in the '80s.
I agree that Fortran is more applicable to System z than C/C++, which are
basically written around an alien architecture, being designed with the PDP-11
mindset that haunts the x86 as well.

However, even IBM cannot escape the dominance of the C/C++ juggernaut, even
though that language is spectacularly ill-suited to the general programming it
is most often used for - as opposed to serving as a substitute for assembler,
which K&R C did admirably well.

John Savard
J. Clarke
2016-02-13 20:52:43 UTC
Post by Quadibloc
Post by J. Clarke
The thing is they have updated their C and C++ to support 64-bit
operation, there's a new Cobol coming if it's not already out, they've
put Java on it, but they've left Fortran stuck in the '80s.
I agree that Fortran is more applicable to System z than C/C++, which are
basically written around an alien architecture, being designed with the PDP-11
mindset that haunts the x86 as well.
However, even IBM cannot escape the dominance of the C/C++ juggernaut, even
though that language is spectacularly ill-suited to the general programming it
is most often used for - as opposed to serving as a substitute for assembler,
which K&R C did admirably well.
Quadi, please reread the first sentence of my post, the one that says:
"they have updated their C and C++ to support 64-bit operation".
Quadibloc
2016-02-13 21:02:52 UTC
I guess I wasn't clear what my point was. I meant that it also seemed odd to me that they would neglect Fortran when they are updating C, because I, at least, didn't view C as well suited to IBM mainframes and their operating systems.

John Savard
J. Clarke
2016-02-13 21:58:41 UTC
Post by Quadibloc
I guess I wasn't clear what my point was. I meant that it also seemed odd to me that they would neglect Fortran when they are updating C, because I, at least, didn't view C as well suited to IBM mainframes and their operating systems.
John Savard
Oh, sorry. It does seem odd. Even odder is that they've got Java
available--I wouldn't expect that to fit in with mainframes at all.
Note that some of our people looked into using Java for the same purpose
for which we use Fortran and found it wanting, but it's been upgraded
since and the IT guys would love to get funded to give it a second go.
Quadibloc
2016-02-14 04:24:34 UTC
Post by J. Clarke
Even odder is that they've got Java
available--I wouldn't expect that to fit in with mainframes at all.
At least I understand their excuse for that: some web sites use server-side Java.

So if IBM wants to sell System z boxes as web servers, they had better support
Java.

John Savard
h***@bbs.cpcn.com
2016-02-14 05:39:04 UTC
Post by Quadibloc
So if IBM wants to sell System z boxes as web servers, they had better support
Java.
I don't work with it, but I'm told that a GUI front-end and classic COBOL
CICS back-end work very well.

Both IBM COBOL and CICS have all sorts of modern features added to them.
How often they're actually used I don't know.
Peter Flass
2016-02-14 15:48:10 UTC
Post by h***@bbs.cpcn.com
Post by Quadibloc
So if IBM wants to sell System z boxes as web servers, they had better support
Java.
I don't work with it, but I'm told that a GUI front-end and classic COBOL
CICS back-end work very well.
Both IBM COBOL and CICS have all sorts of modern features added to them.
How often they're actually used I don't know.
My last POE was all COBOL/CICS/DB2 and, I believe, still is. I think we
were fairly typical.
--
Pete
h***@bbs.cpcn.com
2016-02-13 22:17:31 UTC
Post by Quadibloc
I guess I wasn't clear what my point was. I meant that it also seemed odd to me that they would neglect Fortran when they are updating C, because I, at least, didn't view C as well suited to IBM mainframes and their operating systems.
Agreed. I don't understand why IBM pushes Fortran on another hardware
line and has basically abandoned it on the mainframe (though old stuff
still compiles and runs).

However, I can understand why Fortran isn't used as much in _some_
applications. A lot of engineering work that once required Fortran is
now done in packages or with CAD/CAM. One engineer told me that even
spreadsheets are used.

Indeed, a lot of Fortran work years ago was basically using the computer
as a super-calculator that could easily be done on a spreadsheet. For
instance, they'd have sensors hooked up to a keypunch to punch out cards
representing measurements, and the cards would get read in and processed
by a formula. Presumably today a PC could do all that a lot easier.

However, for "supercomputer" applications, I _guess_ that IBM no longer
markets the Z series, but rather other products, like "Big Blue". I find
this curious since the Z series has enhanced instructions for doing number
crunching, like several kinds of floating point, 128 bit words, etc.
J. Clarke
2016-02-14 01:00:32 UTC
Permalink
Post by h***@bbs.cpcn.com
Post by Quadibloc
I guess I wasn't clear what my point was. I meant that it also seemed odd to me that they would neglect Fortran when they are updating C, because I, at least, didn't view C as well suited to IBM mainframes and their operating systems.
Agreed. I don't understand why IBM pushes Fortran on another hardware
line and has basically abandoned it on the mainframe (though old stuff
still compiles and runs.)
However, I can understand why Fortran isn't used as much in _some_
applications. A lot of engineering work that once required Fortran is
now done in packages or with CAD/CAM. One engineer told me that even
spreadsheets are used.
Indeed, a lot of Fortran work years ago was basically using the computer
as a super-calculator that could easily be done on a spreadsheet. For
instance, they'd have sensors hooked up to a keypunch to punch out cards
representing measurements, and the cards would get read in and processed
by a formula. Presumably today a PC could do all that a lot easier.
However, for "supercomputer" applications, I _guess_ that IBM no longer
markets the Z series, but rather other products, like "Big Blue". I find
this curious since the Z series has enhanced instructions for doing number
crunching, like several kinds of floating point, 128 bit words, etc.
I have wondered what a building full of Z cores could do and wondered
why IBM has never done it, but then I wondered why the PC wasn't based
on the single-chip 360 that IBM demonstrated some time before. On the
other hand, nobody ever accused MVS of being user-friendly and IBM would
have never second-sourced the architecture.
h***@bbs.cpcn.com
2016-02-14 01:52:34 UTC
Permalink
Post by J. Clarke
I have wondered what a building full of Z cores could do and wondered
why IBM has never done it, but then I wondered why the PC wasn't based
on the single-chip 360 that IBM demonstrated some time before. On the
other hand, nobody ever accused MVS of being user-friendly and IBM would
have never second-sourced the architecture.
As for the PC chip, my understanding is that back in the 8088 days,
low powered chips were still quite expensive. Also, the 8088 architecture
initially was far simpler than S/360, and the original PC DOS was far
simpler than S/360-DOS. That is, to build a chip and supporting hardware
that could do everything S/360 could do would've added too much to the
original cost. Remember, floating point required an extra chip, and
initially, PC's maxed at 640K memory. It was later on that they added
all sorts of features.

In addition, while I am weak on the internals, I _suspect_ the PC chip
was a better design for its application. Certainly, running a program
under PC-DOS and the C:> prompt was easier than coding a series of // EXEC
cards.

I don't know the internals of the "Big Blue" machines vs. Z architecture.
My _guess_ is that Z is more intended to serve many users--thousands of
CICS terminals--while Big Blue is intended to focus on heavy number
crunching. I also guess that Big Blue can do math problems faster than Z.
Anne & Lynn Wheeler
2016-02-14 03:28:08 UTC
Permalink
Post by h***@bbs.cpcn.com
I don't know the internals of the "Big Blue" machines vs. Z architecture.
My _guess_ is that Z is more intended to serve many users--thousands of
CICS terminals--while Big Blue is intended to focus on heavy number
crunching. I also guess that Big Blue can do math problems faster than Z.
z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS, (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012

z13 published refs claim 30% more throughput than EC12 (or about 100BIPS)
with 40% more processors ... or about 710MIPS/proc

part of the issue is that memory latency, when measured in count of
processor cycles, is comparable to 60s disk access when measured in
count of 60s processor cycles.

earlier press is that half the per-processor improvement from z10 to
z196 is introduction of features like out-of-order execution, branch
prediction, etc. that have been in other chips for decades ... aka
masking/compensating for the increasing mismatch between memory latency
and processor speed. z196 to ec12 added more such per-processor features.

e5-2600v1 blade about concurrent with z196 ... 400-500+ BIPS (depending
on model). An e5-2600v3 blade is rated at 2.5 times an e5-2600v1 blade,
and an e5-2600v4 blade is rated at 3.5 times an e5-2600v1 blade ... or over
1.5TIPS (single e5-2600v4 blade with processing power of fifteen
max. configured latest z13 mainframes?)

4341 was leading edge of distributed computing tsunami (large
corporations ordering hundreds at a time for placing out in departmental
areas) as well as datacenter 4341 clusters, which had much more processing
power and I/O capacity, much lower price, and much less physical and
environmental footprint. At one point the head of POK felt it was such a
threat to 3033 that he convinced corporate to cut the allocation of a
critical 4341 manufacturing component in half. Before 4341s first
shipped, I was con'ed into doing a benchmark on 4341 engineering machine
in disk product test lab (bldg. 15) for LLNL who was looking at getting
70 for compute farm (leading edge of new supercomputing and cloud
computing paradigm). some old email
http://www.garlic.com/~lynn/lhwemail.html#4341
past posts getting to play disk engineer in bldgs14&15
http://www.garlic.com/~lynn/subtopic.html#disk

in 1980, IBM STL was growing fast and had to move 300 people from the
IMS group to offsite building (with computer access back into the STL
datacenter). They looked at "remote" 3270 support ... but found the
human factors totally unacceptable. I got sucked into doing channel
extension support for local channel-attached 3270 controllers at the
remote building. Optimization, downloading channel programs to the
remote end for execution, helped eliminate the enormous latency of channel
protocol chatter ... and users couldn't tell the difference between "local"
3270 channel operation at the remote end and "local" 3270 channel operation
in STL. some past posts
http://www.garlic.com/~lynn/submisc.html#channel.extender

The hardware vendor tried to get IBM to release my support for the
channel extender ... but there was a group in POK that objected ... they
were afraid that it would make it harder to justify getting some serial
stuff they were playing with released.

In 1988, I'm asked to help LLNL get some serial stuff they had
standardized, which quickly morphs into fibre-channel standard
... including lots of stuff to minimize round-trip protocol chatter
latency.

Then the POK engineers (from 1980) finally get their stuff released as
ESCON with ES/9000 when it is already obsolete. some past posts
http://www.garlic.com/~lynn/submisc.html#escon

Later some POK engineers get involved in fibre-channel standard and
define a heavy-weight protocol that drastically reduces native
throughput, which is finally released as FICON. IBM publishes a "peak
i/o" benchmark for z196 that uses 104 FICON (over 104 fibre-channel)
getting 2M IOPS. About the same time, there is a fibre-channel announced
for e5-2600v1 blade claiming over 1M IOPS (two such fibre-channel have
higher throughput than 104 FICON running over 104 fibre-channel). some
past posts
http://www.garlic.com/~lynn/submisc.html#ficon

old posts referencing jan1992 meeting in Ellison's conference
room on (commercial/DBMS) cluster scaleup
http://www.garlic.com/~lynn/95.html#13

also was working with national labs (including LLNL) on cluster scaleup
for numeric intensive and filesystems ... some old email
http://www.garlic.com/~lynn/lhwemail.html#medusa

within a month of the ellison meeting, cluster scaleup is transferred,
we are told we can't work on anything with more than four processors and
it is announced as supercomputer, 17Feb1992 article announcement for
scientific and technical "only"
http://www.garlic.com/~lynn/2001n.html#6000clusters1
11May1992 article that national lab interest in cluster scaleup caught
company by "surprise" (modulo going back to 1979 and 4341 computer
farm/cluster)
http://www.garlic.com/~lynn/2001n.html#6000clusters2

recent posts mentioning e5-2600:
http://www.garlic.com/~lynn/2015.html#35 [CM] IBM releases Z13 Mainframe - looks like Batman
http://www.garlic.com/~lynn/2015.html#36 [CM] IBM releases Z13 Mainframe - looks like Batman
http://www.garlic.com/~lynn/2015.html#39 [CM] IBM releases Z13 Mainframe - looks like Batman
http://www.garlic.com/~lynn/2015.html#46 Why on Earth Is IBM Still Making Mainframes?
http://www.garlic.com/~lynn/2015.html#78 Is there an Inventory of the Inalled Mainframe Systems Worldwide
http://www.garlic.com/~lynn/2015.html#82 Is there an Inventory of the Installed Mainframe Systems Worldwide
http://www.garlic.com/~lynn/2015c.html#29 IBM Z13
http://www.garlic.com/~lynn/2015c.html#30 IBM Z13
http://www.garlic.com/~lynn/2015c.html#93 HONE Shutdown
http://www.garlic.com/~lynn/2015d.html#39 Remember 3277?
http://www.garlic.com/~lynn/2015e.html#14 Clone Controllers and Channel Extenders
http://www.garlic.com/~lynn/2015f.html#0 What are some of your thoughts on future of mainframe in terms of Big Data?
http://www.garlic.com/~lynn/2015f.html#5 Can you have a robust IT system that needs experts to run it?
http://www.garlic.com/~lynn/2015f.html#35 Moving to the Cloud
http://www.garlic.com/~lynn/2015f.html#93 Miniskirts and mainframes
http://www.garlic.com/~lynn/2015g.html#19 Linux Foundation Launches Open Mainframe Project
http://www.garlic.com/~lynn/2015g.html#42 20 Things Incoming College Freshmen Will Never Understand
http://www.garlic.com/~lynn/2015g.html#93 HP being sued, not by IBM.....yet!
http://www.garlic.com/~lynn/2015g.html#96 TCP joke
http://www.garlic.com/~lynn/2015h.html#2 More "ageing mainframe" (bad) press
http://www.garlic.com/~lynn/2015h.html#108 25 Years: How the Web began
http://www.garlic.com/~lynn/2015h.html#110 Is there a source for detailed, instruction-level performance info?
http://www.garlic.com/~lynn/2015h.html#114 Between CISC and RISC
http://www.garlic.com/~lynn/2016.html#15 Dilbert ... oh, you must work for IBM
http://www.garlic.com/~lynn/2016.html#19 Fibre Chanel Vs FICON
http://www.garlic.com/~lynn/2016b.html#23 IBM's 3033; "The Big One": IBM's 3033
--
virtualization experience starting Jan1968, online at home since Mar1970
h***@bbs.cpcn.com
2016-02-14 05:32:07 UTC
Permalink
Post by Anne & Lynn Wheeler
z13 published refs claim 30% more throughput than EC12 (or about 100BIPS)
with 40% more processors ... or about 710MIPS/proc
1976: COBOL compile 10 minutes wall clock, S/360-40, sole use of machine.
2016: COBOL compile 30 seconds wall clock, Z13, lots of other users.

I have no idea of the significance of the above, if any, but that's
my story and I'm sticking to it.

But it would be neat to somehow go back in time to '76 and run some
benchmark programs on the 360-40, then run them on the Z today and
compare the results.

Presumably a 3390 disk drive is faster than a 2314, and caching helps.

When I changed jobs, one huge difference in speed was from the 2415
"bargain" tape drives to the 6250 bpi tape drives. I think the 6250 bpi
drive could read a tape faster than the 2415 could rewind the tape.
Those suckers flew!
David Wade
2016-02-14 14:59:49 UTC
Permalink
Post by h***@bbs.cpcn.com
Post by J. Clarke
I have wondered what a building full of Z cores could do and wondered
why IBM has never done it, but then I wondered why the PC wasn't based
on the single-chip 360 that IBM demonstrated some time before. On the
other hand, nobody ever accused MVS of being user-friendly and IBM would
have never second-sourced the architecture.
As for the PC chip, my understanding is that back in the 8088 days,
low powered chips were still quite expensive. Also, the 8088 architecture
initially was far simpler than S/360, and the original PC DOS was far
simpler than S/360-DOS. That is, to build a chip and supporting hardware
that could do everything S/360 could do would've added too much to the
original cost. Remember, floating point required an extra chip, and
initially, PC's maxed at 640K memory. It was later on that they added
all sorts of features.
In addition, while I am weak on the internals, I _suspect_ the PC chip
was a better design for its application. Certainly, running a program
under PC-DOS and the C:> prompt was easier than coding a series of // EXEC
cards.
I don't know the internals of the "Big Blue" machines vs. Z architecture.
My _guess_ is that Z is more intended to serve many users--thousands of
CICS terminals--while Big Blue is intended to focus on heavy number
crunching. I also guess that Big Blue can do math problems faster than Z.
IBM already had CMS, which uses human-understandable commands like
"COPYFILE" and "ERASE" and is pretty similar to MS-DOS in many ways.
However, I don't think even that would run well in 256K memory.

Dave
Peter Flass
2016-02-14 15:48:11 UTC
Permalink
Post by David Wade
Post by h***@bbs.cpcn.com
Post by J. Clarke
I have wondered what a building full of Z cores could do and wondered
why IBM has never done it, but then I wondered why the PC wasn't based
on the single-chip 360 that IBM demonstrated some time before. On the
other hand, nobody ever accused MVS of being user-friendly and IBM would
have never second-sourced the architecture.
As for the PC chip, my understanding is that back in the 8088 days,
low powered chips were still quite expensive. Also, the 8088 architecture
initially was far simpler than S/360, and the original PC DOS was far
simpler than S/360-DOS. That is, to build a chip and supporting hardware
that could do everything S/360 could do would've added too much to the
original cost. Remember, floating point required an extra chip, and
initially, PC's maxed at 640K memory. It was later on that they added
all sorts of features.
In addition, while I am weak on the internals, I _suspect_ the PC chip
was a better design for its application. Certainly, running a program
under PC-DOS and the C:> prompt was easier than coding a series of // EXEC
cards.
I don't know the internals of the "Big Blue" machines vs. Z architecture.
My _guess_ is that Z is more intended to serve many users--thousands of
CICS terminals--while Big Blue is intended to focus on heavy number
crunching. I also guess that Big Blue can do math problems faster than Z.
IBM already had CMS, which uses human-understandable commands like
"COPYFILE" and "ERASE" and is pretty similar to MS-DOS in many ways.
However, I don't think even that would run well in 256K memory.
Dave
VM needs a LOT less memory than MVS. My guess is that you could run a
reasonable system in 64K (Lynn?) I know that on the same size machine VM
ran much better than MVS/TSO.
--
Pete
Anne & Lynn Wheeler
2016-02-14 18:15:14 UTC
Permalink
Post by Peter Flass
VM needs a LOT less memory than MVS. My guess is that you could run a
reasonable system in 64K (Lynn?) I know that on the same size machine VM
ran much better than MVS/TSO.
Part of the CMS issue was that it got increasingly bloated over the years
... and was also dependent on mainstream OS/360(MVS) applications ported
to CMS. Endicott did XT/370 ... a couple 68k processors emulating some
part of 370 running modified vm370 (with CMS) ... I/O was done by
interprocessor messages with cp88 running on the 8088 processor ... it
initially only had 384k "real memory". I did some number of "benchmarks"
showing extreme page thrashing ... because of the bloated memory
requirements of compilers and assemblers. As a result they kludged an
extra 128k onto the memory card ... giving 512kbytes ... which helped
reduce the worst of the page thrashing.

However, the 370 applications still tended to be relatively disk
intensive (even w/o page thrashing) ... running on xt/370 they compared
poorly with applications implemented for the pc/xt environment. A CMS
disk i/o required an interprocessor message to cp88, which then did the i/o
on the native XT 100ms/access hard drive. Applications native to the PC/XT
were optimized for much less disk i/o per operation.

MVS/TSO was significantly more bloated than VM370/CMS, much longer
pathlengths, much less efficient system algorithms. CMS had some number
of much more efficient native applications like editor ... but the
compilers and assemblers were just ported to CMS from os360/MVS.

CMS (& CP67 & vm370) I/O was much more efficient than OS360/MVS (but
applications mainframe disks were much faster than PC disks ... so
applications weren't as sensitive to optimization).

Late 70s, IBM San Jose Research had environment with MVS 370/168 and
VM370 370/158 with all 3330 disk strings connected to both machines
... but strict rules that MVS 3330 disks would never be mounted on vm370
"strings". One day they accidentally violated the rule and the
datacenter almost immediately got phone calls from users complaining that
CMS response was horribly degraded. The issue is that the os360/MVS
environment makes heavy use of multi-track search ... which can tie up
channel, controller and disk for up to 1/3sec at a time. An MVS multi-track
search on a VM370 string will lock out the vm370 "dedicated" controller
... having disastrous effects on CMS response. We immediately demanded
that the MVS 3330 disk be moved to an MVS string ... and MVS operations
said they would do it 2nd shift (this was around 10am).

We had enormously optimized VS1 for running under vm370 ... it ran
faster under loaded vm370 on a 370/158 than MVS ran on a 370/168. We put its
3330 up on an MVS string and were able to bring the MVS system to its knees
with multi-track searches (and CMS response almost returned to
normal). MVS operations then agreed to immediately move the MVS 3330, if
we moved the VS1 3330.

Besides all the significant performance issues with MVS/TSO ... a
large part of TSO response is hurt by the underlying
MVS use of multi-track search. Some past posts
http://www.garlic.com/~lynn/submain.html#dasd
--
virtualization experience starting Jan1968, online at home since Mar1970
Anne & Lynn Wheeler
2016-02-14 17:48:05 UTC
Permalink
Post by David Wade
IBM already had CMS, which uses human-understandable commands like
"COPYFILE" and "ERASE" and is pretty similar to MS-DOS in many
ways. However, I don't think even that would run well in 256K memory.
some of the CTSS people
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System

went to the 5th flr and did Multics (some bell labs people were also
involved, but then went back home and did unix ... billed as a simplified
multics), others went to the science center on the 4th flr and did
cp40/cms (on modified 256kbyte 360/40 with virtual memory hardware added), and
then cp67/cms (when standard virtual memory 360/67 became available),
GML, and bunch of other stuff. past posts
http://www.garlic.com/~lynn/subtopic.html#545tech

CP67/cms eventually morphs into vm370/cms (changing name of cambridge
monitor system to conversational monitor system). for quite some time,
default virtual memory size for cms was 256kbyte ... and originally cms
would run on 256kbyte real 360 (w/o cp40 or cp67).

before ms/dos
http://en.wikipedia.org/wiki/MS-DOS
there was seattle computer
http://en.wikipedia.org/wiki/Seattle_Computer_Products
before seattle computer there was cp/m,
http://en.wikipedia.org/wiki/CP/M
before doing cp/m, kildall worked with cp67/cms (precursor to vm370) at npg
http://en.wikipedia.org/wiki/Naval_Postgraduate_School

some cp67/cms ran on 256kbyte 360/67, but more typically 512kbyte or
768kbyte. cp67 kernel was getting a little bloated ... and i did
modifications the summer of 1969 to make part of the cp67 kernel
pageable ... reducing fixed real storage requirements, making it more
efficient on smaller real memory machines (but never shipped in the
standard cp67/cms to customers). Morph to vm370/cms simplified a bunch
of cp67/cms and dropped many of the things I had done in the 60s ... but
did pick up the pageable kernel changes ... however, other things
bloated, so that it was only officially approved for minimum 512kbyte
machines. For some 370/125 customers, I did do some of the additional
cp67/cms pageable kernel changes that made it run better in 256kbyte
machine.

Les sent me paper copy of his cp40 presentation at 1982 SEAS that
I scanned and converted to text
http://www.garlic.com/~lynn/cp40seas1982.txt

some wiki history with lots of refs:
https://en.wikipedia.org/wiki/History_of_CP/CMS
This is a little confused since IDC was formed after NCSS by several people
from MIT Lincoln Labs (which had been the 2nd cp67/cms installation after
the science center; the univ. I was at in the 60s was the 3rd)
https://en.wikipedia.org/wiki/History_of_CP/CMS#1964.3F.E2.80.9372.3F:_IDC.27s_use_of_CP.2FCMS

this references that I continued working on CP67 and then VM370 all
during the Future System period (even ridiculing FS activity) and had
two co-op students from BU helping.
http://www.garlic.com/~lynn/2006v.html#email731212
http://www.garlic.com/~lynn/2006w.html#email750102
http://www.garlic.com/~lynn/2006w.html#email750430

After FS imploded
http://www.garlic.com/~lynn/submain.html#futuresys

and the mad rush to get stuff back into the 370 product pipeline contributed
to the decision to pick up a lot of the stuff that I had been doing and
release it in the standard product ... except for the CMS page-mapped
filesystem and some of the more complex virtual memory management stuff,
including moving additional vm370 data structures to paging store. One of the
BU students graduates and joins IDC, where he re-implements the page-mapped
filesystem, the virtual memory management, and the moving of VM370 kernel
data structures to paging store. He also added single-system-image
cluster support and the ability to migrate running CMS users between
loosely-coupled systems in a cluster complex. This was in the days when
IBM scheduled maintenance required taking systems offline ... migrating
running users made it possible to have 7x24 operations and
non-disruptively take systems offline for maintenance.
http://www.garlic.com/~lynn/submain.html#online

both NCSS and IDC quickly moved up the value chain, offering financial
services. I've commented that IDC was briefly mentioned in Jan2009 when
there was still the fiction that TARP funds were to buy "too big to
fail" off-book toxic assets and IDC would help value those assets ...
however, only $700B had been appropriated for TARP and just the four
largest "too big to fail" were still carrying $5.2T at the end of 2008
(wasn't enough to buy at face value, almost enough to buy at the going
rate of 22cents on the dollar, but then all those institutions would
have to be declared insolvent and be liquidated).
http://www.garlic.com/~lynn/submisc.html#too-big-to-fail
http://www.garlic.com/~lynn/submisc.html#toxic.cdo

some of the stuff mentioned as done by NCSS for VP/CSS, I had already done
for CP67/CMS as an undergraduate at the univ.
https://en.wikipedia.org/wiki/History_of_CP/CMS#1968.E2.80.9386.3F:_VP.2FCSS

Summer of 1968, the science center gives a week-long cp67/cms class in
Century City (california), which the univ. sends me to. The primary person
giving CP67 classes gave notice the friday before that he was leaving to
join NCSS. When I arrive on Sunday, the science center asks me to give part
of the cp67 class.

another early virtual machine based online commercial service was
Tymshare
https://en.wikipedia.org/wiki/Tymshare

in Aug1976, they make their CMS-based online computer conferencing
systems free to (ibm user group) SHARE as VMSHARE ... archives
http://vm.marist.edu/~vmshare
--
virtualization experience starting Jan1968, online at home since Mar1970
Anne & Lynn Wheeler
2016-02-14 20:01:41 UTC
Permalink
from
http://www.garlic.com/~lynn/2007u.html#18 Folklore references to CP67 at Lincoln Labs

from Melinda's vm370 history
http://www.leeandmelindavarian.com/Melinda/

footnote on 360/67 SLT instruction

"The 360/67 SLT instruction RPQ was designed at Lincoln by Jack
Nolan. He was interested in using it for database list processing. Once
it was implemented, IBM found use for it to process lists in the CP
nucleus. I don't know if it was ever used by TSS or for any applications
program." (J.M. Winett, private communication, 1990.)

... snip ...

footnotes on two cp67 commercial timesharing companies (Arnow was
director of computing at Lincoln):

Almost immediately after that, two "spinoff" companies were formed by
former employees of Lincoln Lab, Union Carbide, and the IBM Cambridge
Scientific Center, to provide commercial services based on CP/CMS. Dick
Bayles, Mike Field, Hal Feinleib, and Bob Jay went to the company that
became National CSS.

Harit Nanavati, Bob Seawright, Jack Arnow, Frank Belvin, and Jim March
went to IDC (Interactive Data Corporation). Although the loss of so many
talented people was a blow, the CSC people felt that the success of the
two new companies greatly increased the credibility of CP-67

... snip ...

Bob Seawright was from Union Carbide and his wife was the IBM SE on the
account; they both are assigned to the Cambridge Science Center for
CP67/CMS. Bob does a customized version of os/360 for running in a cp67
virtual machine ... somewhat CMS'ized, with some cms-style commands and
interactions at the os/360 "operator's console".

Dick Bayles, Mike Field, and Harit Nanavati were at the science center.

science center posts
http://www.garlic.com/~lynn/subtopic.html#545tech

cms originally could run on real 256kbyte 360/40 machine w/o cp/40 or
cp/67.
--
virtualization experience starting Jan1968, online at home since Mar1970
Peter Flass
2016-02-14 15:48:07 UTC
Permalink
Post by h***@bbs.cpcn.com
Post by J. Clarke
I have wondered what a building full of Z cores could do and wondered
why IBM has never done it, but then I wondered why the PC wasn't based
on the single-chip 360 that IBM demonstrated some time before. On the
other hand, nobody ever accused MVS of being user-friendly and IBM would
have never second-sourced the architecture.
As for the PC chip, my understanding is that back in the 8088 days,
low powered chips were still quite expensive. Also, the 8088 architecture
initially was far simpler than S/360, and the original PC DOS was far
simpler than S/360-DOS. That is, to build a chip and supporting hardware
that could do everything S/360 could do would've added too much to the
original cost. Remember, floating point required an extra chip, and
initially, PC's maxed at 640K memory. It was later on that they added
all sorts of features.
Microvax was two chips, and the Intel 432 was three.
Post by h***@bbs.cpcn.com
In addition, while I am weak on the internals, I _suspect_ the PC chip
was a better design for its application. Certainly, running a program
under PC-DOS and the C:> prompt was easier than coding a series of // EXEC
cards.
This is confusing the hardware and the software. Even IBM never tried to
run MVS on the XT/370. I still think VM wasn't the right OS for the
general market, but IBM was aiming at developers. There were other OSs for
360/370. I've never used MTS, but I believe it was a simpler system to
use, or something could have been hacked up cheaply on the base of VM with
a better UI. That being said, I think 360/370 architecture was probably
more complex than the market was looking for, although Intel was too simple
until the 486.
Post by h***@bbs.cpcn.com
I don't know the internals of the "Big Blue" machines vs. Z architecture.
My _guess_ is that Z is more intended to serve many users--thousands of
CICS terminals--while Big Blue is intended to focus on heavy number
crunching. I also guess that Big Blue can do math problems faster than Z.
--
Pete
Quadibloc
2016-02-14 04:35:18 UTC
Permalink
Post by h***@bbs.cpcn.com
However, for "supercomputer" applications, I _guess_ that IBM no longer
markets the Z series, but rather other products, like "Big Blue". I find
this curious since the Z series has enhanced instructions for doing number
crunching, like several kinds of floating point, 128 bit words, etc.
Well, its pricing - aimed at database users who are willing to pay the premium
- makes it unreasonable.

Decimal floating point of the type the System z has is also provided by the
POWER 8, if I remember correctly, and even x86 will let you do 128-bit floating-
point. The only thing System z has to itself is HFP, the traditional 360
format, and it's basically what *not* to use for number-crunching.

Of course, that makes it ideal for taking the same program, running it with HFP
instead of IEEE 754 ("BFP" in System z documentation), and if it gives a wildly
different answer, one might suspect the IEEE 754 answer is wrong too. I mean,
it would be nice if people knew how to do numerical analysis, but I don't think
I dare hold my breath.

John Savard
Dan Espen
2016-02-14 15:15:52 UTC
Permalink
Post by Quadibloc
I guess I wasn't clear what my point was. I meant that it also seemed
odd to me that they would neglect Fortran when they are updating C,
because I, at least, didn't view C as well suited to IBM mainframes
and their operating systems.
John Savard
Having worked on mainframe projects with tons of C code,
I don't see much of a problem with C on a mainframe.

You can even deal with packed decimal, but it doesn't seem
worth the effort.
--
Dan Espen
Quadibloc
2016-02-14 16:20:44 UTC
Permalink
Post by Dan Espen
Having worked on mainframe projects with tons of C code,
I don't see much of a problem with C on a mainframe.
You can even deal with packed decimal, but it doesn't seem
worth the effort.
What I'm thinking of is that C's normal I/O library doesn't mesh well with the
way IBM mainframes normally handle disk files - records aren't delimited, they
have length indication instead.

It's nothing that can't be easily taken care of - but then code for the IBM is
not compatible with code for other machines.

John Savard
h***@bbs.cpcn.com
2016-02-13 22:04:53 UTC
Permalink
Post by Quadibloc
However, even IBM cannot escape the dominance of the C/C++ juggernaut, even
though that language is spectacularly ill-suited to the general programming it
is most often used for - as opposed to serving as a substitute for assembler,
which K&R C did admirably well.
I don't know C. If it is used as an assembler language, how does it handle
machine-specific situations such as word vs. character architectures, and
different instruction sets*? Or is a C program still 'compiled' into a
native low-level language for the machine it is to run on?

(I _think_ some folks here said C has replaced assembler in mainframe
airline reservation systems.)


*For example, the Z mainframe has a SQRT instruction now. Does the x86
have one?
Morten Reistad
2016-02-13 22:18:31 UTC
Permalink
Post by h***@bbs.cpcn.com
Post by Quadibloc
However, even IBM cannot escape the dominance of the C/C++ juggernaut, even
though that language is spectacularly ill-suited to the general programming it
is most often used for - as opposed to serving as a substitute for assembler,
which K&R C did admirably well.
I don't know C. If it is used as an assembler language, how does it handle
machine-specific situations such as word vs. character architectures, and
different instruction sets*? Or is a C program still 'compiled' into a
native low-level language for the machine it is to run on?
C is a reasonably standard procedural language, but it has all the trimmings
needed to generate all the arithmetic and logical operations easily, and
is pretty "close to the metal" in its logic.

A loop could be

for (i=0;i<100;i++) foo[i] = i*i;

to make an array foo contain the square of the index in each element.

This would be compiled to some pretty easily mapped intermediate representation, and then
the ~100 optimisation techniques in the normal compiler would be applied to this code,
sometimes resulting in emitted assembler that is pretty far removed
from the original code. And sometimes not.

-O0 -g3 -ggdb is the invocation if you would like to turn the optimisation off
and emit all the debug information and hooks for the debugger;

-O6 -g0 or -Os -g0 would be max optimised for speed or size, with no debug output.
Post by h***@bbs.cpcn.com
(I _think_ some folks here said C has replaced assembler in mainframe
airline reservation systems.)
*For example, the Z mainframe has a SQRT instruction now. Does the x86
have one?
Not that I know of, nor that I care much. This is where C isolates you from the metal.

I invoke

result = sqrt(argument) ;

and let the compiler, linker, loader and libraries sort it out.

-- mrr

Who uses ARM a lot more than x86, but is sometimes confused about what architecture
I am actually running on, even when compiling.
Osmium
2016-02-13 23:25:27 UTC
Permalink
Post by h***@bbs.cpcn.com
Post by Quadibloc
However, even IBM cannot escape the dominance of the C/C++ juggernaut, even
though that language is spectacularly ill-suited to the general programming it
is most often used for - as opposed to serving as a substitute for assembler,
which K&R C did admirably well.
I don't know C. If it is used as an assembler language, how does it handle
machine-specific situations such as word vs. character architectures, and
different instruction sets*? Or is a C program still 'compiled' into a
native low-level language for the machine it is to run on?
(I _think_ some folks here said C has replaced assembler in mainframe
airline reservation systems.)
You are being too literal. The mention of assembly language is used to
indicate C is "close to the metal". It has bit fiddling and shifts. But it
doesn't have arithmetic flags, such as overflow. Addressing assumes the
world is made up of characters which have at least 8 bits. Any other data
entity is created by compiler magic.

Speaking of literal. The Attorney General of the US said, a couple days
ago, that the powers that be in Ferguson, MO were literally breaking the
backs of their black citizens with the usage of fines. I suppose she has
some college so it must be OK to talk like that.
J. Clarke
2016-02-14 00:42:45 UTC
Permalink
Post by h***@bbs.cpcn.com
Post by Quadibloc
However, even IBM cannot escape the dominance of the C/C++ juggernaut, even
though that language is spectacularly ill-suited to the general programming it
is most often used for - as opposed to serving as a substitute for assembler,
which K&R C did admirably well.
I don't know C. If it is used as an assembler language, how does it handle
machine-specific situations such as word vs. character architectures, and
different instruction sets*? Or is a C program still 'compiled' into a
native low-level language for the machine it is to run on?
(I _think_ some folks here said C has replaced assembler in mainframe
airline reservation systems.)
*For example, the Z mainframe has a SQRT instruction now. Does the x86
have one?
C compiles to assembler or machine code.

X86 has had SQRT since the '486. It had it before that but only in an
add-on chip.
Quadibloc
2016-02-14 04:30:21 UTC
Permalink
Post by J. Clarke
Post by h***@bbs.cpcn.com
*For example, the Z mainframe has a SQRT instruction now. Does the x86
have one?
X86 has had SQRT since the '486. It had it before that but only in an
add-on chip.
Yes, pretty well _all_ microprocessor chips with hardware floating-point these
days even have instructions for LOG, SIN, and COS. In the case of SQRT, it's
almost mandated by the IEEE 754 spec, since an algorithm exists for SQRT that
produces the correctly rounded floating-point result -
just as is the case for the four arithmetic operations.

This seems strange to a fossil like myself, because no one particularly felt a
need to have hardware trig functions and the like even on big mainframe
computers back in the old days. But today it's practically _de rigueur_.

Presumably it's because techniques such as the CORDIC algorithm allow hardware
to do a much better job than a polynomial approximation.

John Savard
h***@bbs.cpcn.com
2016-02-14 05:49:41 UTC
Permalink
Post by Quadibloc
This seems strange to a fossil like myself, because no one particularly felt a
need to have hardware trig functions and the like even on big mainframe
computers back in the old days. But today it's practically _de rigueur_.
In reading the IBM history, every new design was subject to debate and
eventual tradeoffs for features/speed vs. cost. I suspect that adding
the hardware to do trig functions on the 709x series simply would've cost
too much to justify the higher selling price and computing time saved.

As a reminder, some low-end machines back then didn't even have divide
hardware; division was done in software. I was surprised when I learned
that, but again, the cost of the extra circuits likely didn't warrant the
hardware.

I do think some of the features in COBOL-for-MVS should've been offered
about ten years earlier than they were. It would've helped speed development.

As an aside, on bitsavers, I think in the 1401 section, is a detailed
logic layout for the CPU--all the little AND, OR, and NOT gates. It is
absolutely amazing that people could design this incredibly complex stuff
that would work accurately all the time, then translate all the logic plans
into actual physical circuit cards. They made 10,000 of them.
Quadibloc
2016-02-14 14:36:06 UTC
Permalink
Post by h***@bbs.cpcn.com
As a reminder, some low-end machines back then didn't even have divide
hardware; division was done in software.
Well, on many low-end machines, multiplication was done by software, although in
many such cases one could get an optional add-on to do both multiplication and
division in hardware.

The Extended Arithmetic Element of the PDP-8 is a typical example.

John Savard
Peter Flass
2016-02-14 15:48:11 UTC
Permalink
Post by Quadibloc
Post by h***@bbs.cpcn.com
As a reminder, some low-end machines back then didn't even have divide
hardware; division was done in software.
Well, on many low-end machines, multiplication was done by software, although in
many such cases one could get an optional add-on to do both multiplication and
division in hardware.
The Extended Arithmetic Element of the PDP-8 is a typical example.
Many machines had software floating-point, such as the SDS940. I think
one model in the series had hardware FP, but that was an exception.
Post by Quadibloc
John Savard
--
Pete
Quadibloc
2016-02-14 16:23:15 UTC
Permalink
Post by Peter Flass
Many machines had software floating-point, such as the SDS940. I think
one model in the series had hardware FP, but that was an exception.
Well, there was the 9300, which was related, but which wasn't really in the
series.

John Savard
Quadibloc
2016-02-14 16:27:54 UTC
Permalink
Post by Quadibloc
Post by Peter Flass
Many machines had software floating-point, such as the SDS940. I think
one model in the series had hardware FP, but that was an exception.
Well, there was the 9300, which was related, but which wasn't really in the
series.
I just checked. Hardware floating-point was available for the 9300 as an option.

The 930 (and thus the 940) didn't have such an option available.

John Savard
Peter Flass
2016-02-14 15:48:09 UTC
Permalink
Post by Quadibloc
Post by J. Clarke
Post by h***@bbs.cpcn.com
*For example, the Z mainframe has a SQRT instruction now. Does the x86
have one?
X86 has had SQRT since the '486. It had it before that but only in an
add-on chip.
Yes, pretty well _all_ microprocessor chips with hardware floating-point these
days even have instructions for LOG, SIN, and COS. In the case of SQRT, it's
almost mandated by the IEEE 754 spec, since an algorithm exists for SQRT that
produces the correctly rounded floating-point result -
just as is the case for the four arithmetic operations.
This seems strange to a fossil like myself, because no one particularly felt a
need to have hardware trig functions and the like even on big mainframe
computers back in the old days. But today it's practically _de rigueur_.
I never did the scientific stuff, but no place I ever worked would have
used them. Remember, floating-point was an option on the 360/30, and I
saw few shops that had that. Hardware was expensive and constrained back
then.
Post by Quadibloc
Presumably it's because techniques such as the CORDIC algorithm allow hardware
to do a much better job than a polynomial approximation.
John Savard
--
Pete
Morten Reistad
2016-02-13 16:25:33 UTC
Permalink
Post by Quadibloc
Post by J. Clarke
In article <630582847.477066945.696306.peter_flass-
Post by Peter Flass
Most companies try to develop products that give the customer what he
wants. Microsoft develops products that give the customer what microsoft
wants.
We're currently porting some code written in Fortran in the early '70s
to C, mostly because IBM hasn't issued a version upgrade of Fortran on
the mainframe since some time in the '80s. It's not EOL--they'll fix
bugs if they find them and when they add new features to the hardware
they _may_ update the compiler to provide support for them.
Well, that's IBM trying to serve the customers it has.
It sells System z architecture at premium prices, mainly to people who want to
use its premium database products that run the most robustly on its legacy
hardware. Its main competition is Oracle.
People wanting to do scientific computation on IBM hardware are expected to use
PowerPC hardware, which offers better price-performance. Fortran for that
hardware, I presume, is kept more up-to-date.
And you can run a pretty full-featured Linux on System z. Fortran included.

-- mrr
Quadibloc
2016-02-13 19:34:10 UTC
Permalink
Post by Morten Reistad
And you can run a pretty full-featured Linux on System z. Fortran included.
Yes. What it won't do, though, is generate code that runs under z/OS.

If you want to run Linux, you have many way cheaper alternatives than a System z.

John Savard
J. Clarke
2016-02-13 21:47:18 UTC
Permalink
Post by Morten Reistad
Post by Quadibloc
Post by J. Clarke
In article <630582847.477066945.696306.peter_flass-
Post by Peter Flass
Most companies try to develop products that give the customer what he
wants. Microsoft develops products that give the customer what microsoft
wants.
We're currently porting some code written in Fortran in the early '70s
to C, mostly because IBM hasn't issued a version upgrade of Fortran on
the mainframe since some time in the '80s. It's not EOL--they'll fix
bugs if they find them and when they add new features to the hardware
they _may_ update the compiler to provide support for them.
Well, that's IBM trying to serve the customers it has.
It sells System z architecture at premium prices, mainly to people who want to
use its premium database products that run the most robustly on its legacy
hardware. Its main competition is Oracle.
People wanting to do scientific computation on IBM hardware are expected to use
PowerPC hardware, which offers better price-performance. Fortran for that
hardware, I presume, is kept more up-to-date.
And you can run a pretty full-featured Linux on System z. Fortran included.
We've had people look at that. The basic problem is that the Linux and
Z/OS calling conventions are different and a wrapper has to be written
in assembler to translate them. It would be nice if IBM added a switch
to GFORTRAN that says "emit Z/OS native code" but as far as I know they
haven't.

Now, one could go off on a long tangent about such products as Tachyon
Workbench, but experimenting with those takes time and money that nobody
is willing to spend without there being a compelling business case for
them. When there's a perfectly satisfactory, fully supported, native C
compiler that business case gets very difficult to make.
Andrew Swallow
2016-02-14 04:09:45 UTC
Permalink
On 13/02/2016 16:20, Quadibloc wrote:
{snip}
Post by Quadibloc
People wanting to do scientific computation on IBM hardware are expected to use
PowerPC hardware, which offers better price-performance. Fortran for that
hardware, I presume, is kept more up-to-date.
John Savard
Hardware is so cheap and fast these days it may be easier to find a
compatible Fortran compiler and buy the hardware that goes with it.
Peter Flass
2016-02-14 15:48:08 UTC
Permalink
Post by Andrew Swallow
{snip}
Post by Quadibloc
People wanting to do scientific computation on IBM hardware are expected to use
PowerPC hardware, which offers better price-performance. Fortran for that
hardware, I presume, is kept more up-to-date.
John Savard
Hardware is so cheap and fast these days it may be easier to find a
compatible Fortran compiler and buy the hardware that goes with it.
One thing to look into: I think gcc now runs on z/OS and generates native
code. Does this also apply to gfortran, which AFAIK is just a front-end to
the same code generator? Is this a better Fortran than IBM's?
--
Pete
J. Clarke
2016-02-14 17:12:35 UTC
Permalink
In article <150783820.477156402.082855.peter_flass-
Post by Peter Flass
Post by Andrew Swallow
{snip}
Post by Quadibloc
People wanting to do scientific computation on IBM hardware are expected to use
PowerPC hardware, which offers better price-performance. Fortran for that
hardware, I presume, is kept more up-to-date.
John Savard
Hardware is so cheap and fast these days it may be easier to find a
compatible Fortran compiler and buy the hardware that goes with it.
One thing to look into. I think gcc now runs on zOS and generates native
code. Does this also apply to gfortran, which AFAIK is just a front-end to
the same code generator. Is this a better fortran than IBM's?
There is a gcc that runs natively on Z/OS and emits native code. How
well does it work and play with JCL?
h***@bbs.cpcn.com
2016-02-13 17:18:28 UTC
Permalink
Post by J. Clarke
We're currently porting some code written in Fortran in the early '70s
to C, mostly because IBM hasn't issued a version upgrade of Fortran on
the mainframe since some time in the '80s. It's not EOL--they'll fix
bugs if they find them and when they add new features to the hardware
they _may_ update the compiler to provide support for them.
There is still some old Fortran that runs on the Z mainframe. But
I think most Fortran work is now done by other means, such as CAD/CAM
packages or spreadsheets, or on minicomputers where Fortran is still
supported and extended.

But if a conversion is necessary, today's mainframe COBOL supports
some sci/eng functions. Might be easier to go to COBOL than C.
J. Clarke
2016-02-13 18:07:51 UTC
Permalink
Post by h***@bbs.cpcn.com
Post by J. Clarke
We're currently porting some code written in Fortran in the early '70s
to C, mostly because IBM hasn't issued a version upgrade of Fortran on
the mainframe since some time in the '80s. It's not EOL--they'll fix
bugs if they find them and when they add new features to the hardware
they _may_ update the compiler to provide support for them.
There is still some old Fortran that still runs on the Z mainframe. But
I think most Fortran work is now done by other means, such as CAD/CAM or spreadsheets; or mini-computers where Fortran is still supported
and expanded.
But if a conversion is necessary, today's mainframe COBOL supports
some sci/eng functions. Might be easier to go to COBOL than C.
For certain values of "Today's". We're looking at a Cobol upgrade. The
big question is how much it's going to break and how much it will cost
to fix whatever it breaks (cost not just in programmer time but in
pissed-off agents and customers and in lost business) and so far it's
not looking good for the upgrade.

But we are trying to avoid writing more code in languages for which
programmers are difficult to find.
h***@bbs.cpcn.com
2016-02-13 18:48:46 UTC
Permalink
Post by J. Clarke
But we are trying to avoid writing more code in languages for which
programmers are difficult to find.
I don't know about Fortran, but it should be easy to find COBOL programmers.
A ton of people in their 40s, 50s, and 60s were trained in COBOL.
J. Clarke
2016-02-13 20:49:23 UTC
Permalink
Post by h***@bbs.cpcn.com
Post by J. Clarke
But we are trying to avoid writing more code in languages for which
programmers are difficult to find.
I don't know about Fortran, but it should be easy to find COBOL programmers.
A ton of people in their 40s, 50s, and 60s were trained in COBOL.
But how many of those have math degrees? We don't just need people who
can write code, we need people who can both write code and speak
actuary.
h***@bbs.cpcn.com
2016-02-13 22:09:29 UTC
Permalink
Post by J. Clarke
Post by h***@bbs.cpcn.com
Post by J. Clarke
But we are trying to avoid writing more code in languages for which
programmers are difficult to find.
I don't know about Fortran, but it should be easy to find COBOL programmers.
A ton of people in their 40s, 50s, and 60s were trained in COBOL.
But how many of those have math degrees? We don't just need people who
can write code, we need people who can both write code and speak
actuary.
Ah, you need specialists, not just a programmer.

(Actually, I know of an ideal candidate for you. WW II vet, went to
college on the GI Bill, then worked as an actuary, with Fortran
programming. I'll ask him the next time I see him. <g>)
J. Clarke
2016-02-14 00:56:01 UTC
Permalink
Post by h***@bbs.cpcn.com
Post by J. Clarke
Post by h***@bbs.cpcn.com
Post by J. Clarke
But we are trying to avoid writing more code in languages for which
programmers are difficult to find.
I don't know about Fortran, but it should be easy to find COBOL programmers.
A ton of people in their 40s, 50s, and 60s were trained in COBOL.
But how many of those have math degrees? We don't just need people who
can write code, we need people who can both write code and speak
actuary.
Ah, you need specialists, not just a programmer.
(Actually, I know of an ideal candidate for you. WW II vet, went to
college on the GI Bill, then worked as an actuary, with Fortran
programming. I'll ask him the next time I see him. <g>)
He does sound ideal. We just filled the last opening in our area but
I'll see if there's anything else.
h***@bbs.cpcn.com
2016-02-14 01:43:11 UTC
Permalink
Post by J. Clarke
Post by h***@bbs.cpcn.com
(Actually, I know of an ideal candidate for you. WW II vet, went to
college on the GI Bill, then worked as an actuary, with Fortran
programming. I'll ask him the next time I see him. <g>)
He does sound ideal. We just filled the last opening in our area but
I'll see if there's anything else.
Well, he does like his current job. <g>
Andrew Swallow
2016-02-14 04:16:39 UTC
Permalink
Post by J. Clarke
Post by h***@bbs.cpcn.com
Post by J. Clarke
But we are trying to avoid writing more code in languages for which
programmers are difficult to find.
I don't know about Fortran, but it should be easy to find COBOL programmers.
A ton of people in their 40s, 50s, and 60s were trained in COBOL.
But how many of those have math degrees? We don't just need people who
can write code, we need people who can both write code and speak
actuary.
Where are the youngsters now that you employed 25 years ago?
Morten Reistad
2016-02-13 22:35:15 UTC
Permalink
Post by h***@bbs.cpcn.com
Post by J. Clarke
But we are trying to avoid writing more code in languages for which
programmers are difficult to find.
I don't know about Fortran, but it should be easy to find COBOL programmers.
A ton of people in their 40s, 50s, and 60s were trained in COBOL.
A programmer worth the title should be able to pick up something as
reasonably straightforward as cobol, fortran, c or pascal in a few months.
At least to a working relationship with old code.

I can understand struggling with lisp or java.

-- mrr
J. Clarke
2016-02-14 01:08:13 UTC
Permalink
Post by Morten Reistad
Post by h***@bbs.cpcn.com
Post by J. Clarke
But we are trying to avoid writing more code in languages for which
programmers are difficult to find.
I don't know about Fortran, but it should be easy to find COBOL programmers.
A ton of people in their 40s, 50s, and 60s were trained in COBOL.
A programmer worth the title should be able to pick up something as
reasonably straightforward as cobol, fortran, c or pascal in a few months.
At least to a working relationship with old code.
It's not that they can't, it's that they for the most part won't. It
seems to be a kid thing--fear of being ostracized by their friends for
using such a clunky obsolete language or some such.
Post by Morten Reistad
I can understand struggling with lisp or java.
Well, that's the other shoe--when we aren't porting Fortran to C we're
setting up actuarial models in APL that are used for QC purposes (we run
the production code against the APL models to confirm that it's working
properly).
h***@bbs.cpcn.com
2016-02-14 01:55:41 UTC
Permalink
Post by J. Clarke
It's not that they can't, it's that they for the most part won't. It
seems to be a kid thing--fear of being ostracized by their friends for
using such a clunky obsolete language or some such.
Perhaps prospective candidates are worried about their future down
the road--that having experience on an obsolete language won't look
good to a subsequent employer. FWIW, _historically_, that was a prudent
attitude to have.
Post by J. Clarke
Well, that's the other shoe--when we aren't porting Fortran to C we're
setting up actuarial models in APL that are used for QC purposes (we run
the production code against the APL models to confirm that it's working
properly).
APL--well that disqualifies me! <g>
Andrew Swallow
2016-02-14 04:14:29 UTC
Permalink
Post by h***@bbs.cpcn.com
Post by J. Clarke
But we are trying to avoid writing more code in languages for which
programmers are difficult to find.
I don't know about Fortran, but it should be easy to find COBOL programmers.
A ton of people in their 40s, 50s, and 60s were trained in COBOL.
People may have taken COBOL and Fortran off their CV but still be able
to do it.
Peter Flass
2016-02-14 15:48:09 UTC
Permalink
Post by Andrew Swallow
Post by h***@bbs.cpcn.com
Post by J. Clarke
But we are trying to avoid writing more code in languages for which
programmers are difficult to find.
I don't know about Fortran, but it should be easy to find COBOL programmers.
A ton of people in their 40s, 50s, and 60s were trained in COBOL.
People may have taken COBOL and Fortran off their CV but still be able
to do it.
At some point I stopped listing them.
--
Pete
Morten Reistad
2016-02-13 15:13:51 UTC
Permalink
Post by Peter Flass
Post by h***@bbs.cpcn.com
Post by Scott Lurndal
It wasn't a mistake, it was a smart decision. You want to run 30 year
old software, buy a 30-year old CPU.
Just as an aside, in the mainframe world, we routinely ran 30 or even
40 year old software. The 30 y/o stuff runs native mode. If the 40 y/o
stuff was written for S/360, it will run native mode. If it was written
for a prior generation of hardware, it would run under emulation.
Most companies try to develop products that give the customer what he
wants. Microsoft develops products that give the customer what microsoft
wants.
Microsoft is now another Nokia or Ericsson. Fitting with the merger.

Nokia and Ericsson were depending more and more heavily on their reseller
networks, and when these turned into large mobile operators and became
rather few in number, the power game turned. Nokia discovered that they
couldn't do a thing with their mainstream products that wasn't firmly
anchored with the operators. Never mind the end users.

Their customers numbered around 30 at the 95th percentile, and those
customers made increasing demands back, to fit a revenue model that suited
the operators, not the end users.

Enter the smartphones, to challenge the operators. The operators demanded
the same from Apple and Google, only to be laughed out the door. It took
Apple and Google a few months to set up an alternate sales network. This
isn't hard when you have the hottest products.

Microsoft is getting a dose of this now. Their customers are the large
hardware vendors. They make huge demands back. The driver division in
Microsoft is a political hellhole that burns through people, but they
are rewarded with other positions after an 18-month tenure there. This is
because they simply have to accommodate all the vendors' weird quirks
in their hardware and keep Windows stable. It is a tall order, and they
are doing a remarkable job, all things considered.

But this has wider implications. Windows cannot be too big a success
on xpods or xphones without alienating this channel, which is the lifeblood
of the Microsoft revenue stream. Just as Nokia couldn't make a phone
that was smart independently of the operator networks.

So Microsoft, Nokia, and Ericsson all listened to their customers, and
ignored their customers' customers. Like IBM said: if you own your customers'
customers you can ignore your customers.

-- mrr
Blanco
2016-02-13 17:44:42 UTC
Permalink
Post by J. Clarke
In article
Post by Peter Flass
Post by h***@bbs.cpcn.com
Post by Scott Lurndal
It wasn't a mistake, it was a smart decision. You want to run 30 year
old software, buy a 30-year old CPU.
Just as an aside, in the mainframe world, we routinely ran 30 or even
40 year old software. The 30 y/o stuff runs native mode. If the 40 y/o
stuff was written for S/360, it will run native mode. If it was written
for a prior generation of hardware, it would run under emulation.
Most companies try to develop products that give the customer what he
wants. Microsoft develops products that give the customer what microsoft
wants.
Microsoft is now another Nokia or Ericsson. Fitting with the merger.
Nokia and Ericsson were depending heavier and heavier on their reseller
networks, and when these turned into large mobile operators and became
rather few in number the power game turned. Nokia discovered that they
couldn't do a thing with their mainstream products that weren't firmly
anchored with the operators. Never mind the end users.
Their customers were around 30 in number for a 95% percentile and
made increasing demands back to fit a revenue model that fitted the operators,
not the end users.
Enter the smartphones, to challenge the operators. The operators demanded
the same from Apple and Google, only to be laughed out the door. It took
Apple and Google a few months to set up an alternate sales network. This
isn't hard when you have the hottest products.
Microsoft is getting a doze of this now. Their customers are the large
hardware vendors. They make huge demands back. The driver division in
Microsoft is a political hellhole that burns through people, but they
are rewarded with other positions after 18 months tenure there. This is
because they simply have to accommodate all the vendors weird quirks
in their hardware and keep windows stable. It is a tall order, and they
are doing a remarkable job, all considering,
But this has wider implications. Windows cannot be too big a success
on xpods or xphones without alienating this channel, which is the lifeblood
of the Microsoft revenue stream. Just as Nokia couldn't make a phone
that was smart independently of the operator networks.
That line doesn’t explain why Nokia was so successful before
Apple got involved in the mobile/cell phone market.
Post by J. Clarke
So both Microsoft, Nokia and Ericsson listened to their customers, and
ignored their customers customer. Like IBM said; if you own your customers
customers you can ignore your customers.
That was never Nokia's problem.
Morten Reistad
2016-02-13 22:30:14 UTC
Permalink
Post by J. Clarke
In article
Post by Peter Flass
Post by h***@bbs.cpcn.com
Post by Scott Lurndal
It wasn't a mistake, it was a smart decision. You want to run 30 year
old software, buy a 30-year old CPU.
Just as an aside, in the mainframe world, we routinely ran 30 or even
40 year old software. The 30 y/o stuff runs native mode. If the 40 y/o
stuff was written for S/360, it will run native mode. If it was written
for a prior generation of hardware, it would run under emulation.
Most companies try to develop products that give the customer what he
wants. Microsoft develops products that give the customer what microsoft
wants.
Microsoft is now another Nokia or Ericsson. Fitting with the merger.
Nokia and Ericsson were depending heavier and heavier on their reseller
networks, and when these turned into large mobile operators and became
rather few in number the power game turned. Nokia discovered that they
couldn't do a thing with their mainstream products that weren't firmly
anchored with the operators. Never mind the end users.
Their customers were around 30 in number for a 95% percentile and
made increasing demands back to fit a revenue model that fitted the operators,
not the end users.
Enter the smartphones, to challenge the operators. The operators demanded
the same from Apple and Google, only to be laughed out the door. It took
Apple and Google a few months to set up an alternate sales network. This
isn't hard when you have the hottest products.
Microsoft is getting a doze of this now. Their customers are the large
hardware vendors. They make huge demands back. The driver division in
Microsoft is a political hellhole that burns through people, but they
are rewarded with other positions after 18 months tenure there. This is
because they simply have to accommodate all the vendors weird quirks
in their hardware and keep windows stable. It is a tall order, and they
are doing a remarkable job, all considering,
But this has wider implications. Windows cannot be too big a success
on xpods or xphones without alienating this channel, which is the lifeblood
of the Microsoft revenue stream. Just as Nokia couldn't make a phone
that was smart independently of the operator networks.
That line doesn't explain why Nokia was so successful before
Apple got involved in the mobile/cell phone market.
They forced every application through a rigid infrastructure. Even
having the phone automatically register in a number of wifi zones
was beyond them.

This meant that the door was wide open for the competition to bash
them, once they got momentum. And bash them they did. They also
discovered how hamstrung they were once the iPhone had started to
hurt. By then it was too late.
Post by J. Clarke
So both Microsoft, Nokia and Ericsson listened to their customers, and
ignored their customers customer. Like IBM said; if you own your customers
customers you can ignore your customers.
That was never Nokia's problem.
And what was?

Old technology phones. But this explains WHY they were old tech.

-- mrr
Blanco
2016-02-14 02:27:57 UTC
Permalink
Post by Morten Reistad
Post by Blanco
Post by J. Clarke
In article
Post by Peter Flass
Post by h***@bbs.cpcn.com
Post by Scott Lurndal
It wasn't a mistake, it was a smart decision. You want to run 30 year
old software, buy a 30-year old CPU.
Just as an aside, in the mainframe world, we routinely ran 30 or even
40 year old software. The 30 y/o stuff runs native mode. If the 40 y/o
stuff was written for S/360, it will run native mode. If it was written
for a prior generation of hardware, it would run under emulation.
Most companies try to develop products that give the customer what he
wants. Microsoft develops products that give the customer what microsoft
wants.
Microsoft is now another Nokia or Ericsson. Fitting with the merger.
Nokia and Ericsson were depending heavier and heavier on their reseller
networks, and when these turned into large mobile operators and became
rather few in number the power game turned. Nokia discovered that they
couldn't do a thing with their mainstream products that weren't firmly
anchored with the operators. Never mind the end users.
Their customers were around 30 in number at the 95th percentile, and
made increasing demands back to fit a revenue model that suited the
operators, not the end users.
Enter the smartphones, to challenge the operators. The operators demanded
the same from Apple and Google, only to be laughed out the door. It took
Apple and Google a few months to set up an alternate sales network. This
isn't hard when you have the hottest products.
Microsoft is getting a dose of this now. Their customers are the large
hardware vendors. They make huge demands back. The driver division in
Microsoft is a political hellhole that burns through people, but they
are rewarded with other positions after 18 months' tenure there. This is
because they simply have to accommodate all the vendors' weird quirks
in their hardware and keep Windows stable. It is a tall order, and they
are doing a remarkable job, all things considered.
But this has wider implications. Windows cannot be too big a success
on xpods or xphones without alienating this channel, which is the lifeblood
of the Microsoft revenue stream. Just as Nokia couldn't make a phone
that was smart independently of the operator networks.
That line doesn’t explain why Nokia was so successful before
Apple got involved in the mobile/cell phone market.
They forced every application through a rigid infrastructure. Even
having the phone automatically register in a number of wifi zones
was beyond them.
That wasn't the reason the iPhone took off so spectacularly.

The UI on the iPhone left Nokia's for dead.

The touch-screen UI in spades.

And once Apple got its act together, third-party apps were so easy
to write that, combined with the UI, they were the reason Nokia was
left in the dust and took so long to recover.

And then they fucked up completely when they chose Windows Phone
instead of Android as the OS for their high-end phones.
Post by Morten Reistad
This meant that the door was wide open for the competition to bash
them, once they got momentum. And bash them they did. They also
discovered how hamstrung they were once the iPhone had started to
hurt. By then it was too late.
Apple showed it was never too late if you had a clue about
where the industry was headed. Same with Google.
Post by Morten Reistad
Post by Blanco
Post by J. Clarke
So Microsoft, Nokia and Ericsson all listened to their customers, and
ignored their customers' customer. Like IBM said: if you own your customers'
customers you can ignore your customers.
That was never Nokia's problem.
And what was?
Old technology phones.
There was a lot more involved than just that.

Same with the BlackBerry.
Post by Morten Reistad
But this explains WHY they were old tech.
No.
Stephen Sprunk
2016-02-14 19:43:08 UTC
Permalink
Post by Morten Reistad
But this has wider implications. Windows cannot be too big a success
on xpods or xphones without alienating this channel, which is the
lifeblood of the Microsoft revenue stream. Just as Nokia couldn't
make a phone that was smart independently of the operator networks.
One must remember that Microsoft has only two profitable products:
Windows and Office. Everything else loses money; the only reason they
exist at all is to guarantee continued sales of Windows and/or drive
competitors (whose products may also run on non-Windows platforms) out
of business.

Smartphones, tablets and the Web in general represent an existential
threat to Microsoft because they provide customers (and competitors) a
way around Windows. Microsoft is trying to use its dominance on the
desktop to establish dominance in these spaces too, but so far, it
hasn't worked--and we may now be seeing the beginning of their end.

S
--
Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking
William Pechter
2016-02-12 17:38:28 UTC
Permalink
Post by Quadibloc
Post by h***@bbs.cpcn.com
Post by Quadibloc
They also have a version that runs on 32-bit Windows ... but *not*
one for 64-bit
Post by h***@bbs.cpcn.com
Post by Quadibloc
Windows!
Wasn't there some sort of add-on for modern Windows to allow old DOS
applications, such as QBASIC, to run on it?
Windows comes with a virtual machine, however Microsoft in recent
releases has crippled it in enough ways to make it less useful than it
once was. It was used to support a "Virtual XP" in some versions of
Windows 7 that allowed 32-bit-only code to run, but that broke with some
16-bit code.
Can't you run Windows XP, 2000, or 98 in Hyper-V on Windows past version 7?

I know 8.1 had the desktop Client Hyper-V built in, which seemed
equivalent in function to VMware Workstation.
--
Digital had it then. Don't you wish you could buy it now!
pechter-at-gmail.com http://xkcd.com/705/
J. Clarke
2016-02-13 00:33:05 UTC
Permalink
Post by William Pechter
Post by Quadibloc
Post by h***@bbs.cpcn.com
Post by Quadibloc
They also have a version that runs on 32-bit Windows ... but *not*
one for 64-bit
Post by h***@bbs.cpcn.com
Post by Quadibloc
Windows!
Wasn't there some sort of add-on for modern Windows to allow old DOS
applications, such as QBASIC, to run on it?
Windows comes with a virtual machine, however Microsoft in recent
releases has crippled it in enough ways to make it less useful than it
once was. It was used to support a "Virtual XP" in some versions of
Windows 7 that allowed 32-bit-only code to run, but that broke with some
16-bit code.
Can't you run Windows XP, 2000, or 98 in Hyper-V on Windows past version 7?
One problem is that Microsoft has limited emulation of legacy I/O
devices and you may not be able to find drivers for older operating
systems that support the I/O devices that Microsoft emulates. This
means that you may find yourself lacking a keyboard, or a mouse, or
video, or something else important, depending on what you're trying to
run.
Post by William Pechter
I know 8.1 had the desktop Client Hyper-V built in, which seemed
equivalent in function to VMware Workstation.
It's the same VM technology that goes into the server products--
Microsoft's aiming that at running multiple Windows 7+ or Linux sessions
on a single server, not at supporting legacy systems, which limits its
utility on the desktop.
Quadibloc
2016-02-12 17:51:46 UTC
Permalink
Post by J. Clarke
Post by h***@bbs.cpcn.com
Post by Quadibloc
They also have a version that runs on 32-bit Windows ... but *not* one for 64-bit
Windows!
Wasn't there some sort of add-on for modern Windows to allow old DOS
applications, such as QBASIC, to run on it?
Windows comes with a virtual machine, however Microsoft in recent
releases has crippled it in enough ways to make it less useful than it
once was. It was used to support a "Virtual XP" in some versions of
Windows 7 that allowed 32-bit-only code to run, but that broke with some
16-bit code.
Yes; this is "Windows XP Mode", available with Windows 7 Professional. But it
is not available in Windows 8 or later.

John Savard
J. Clarke
2016-02-13 00:24:55 UTC
Permalink
Post by Quadibloc
Post by J. Clarke
Post by h***@bbs.cpcn.com
Post by Quadibloc
They also have a version that runs on 32-bit Windows ... but *not* one for 64-bit
Windows!
Wasn't there some sort of add-on for modern Windows to allow old DOS
applications, such as QBASIC, to run on it?
Windows comes with a virtual machine, however Microsoft in recent
releases has crippled it in enough ways to make it less useful than it
once was. It was used to support a "Virtual XP" in some versions of
Windows 7 that allowed 32-bit-only code to run, but that broke with some
16-bit code.
Yes; this is "Windows XP Mode", available with Windows 7 Professional. But it
is not available in Windows 8 or later.
John Savard
The virtual machine is there, assuming you have a CPU with the right
features to support it. What's not there is the XP license.
jmfbahciv
2016-02-13 14:37:13 UTC
Permalink
Post by J. Clarke
Post by Quadibloc
Post by J. Clarke
Post by h***@bbs.cpcn.com
Post by Quadibloc
They also have a version that runs on 32-bit Windows ... but *not* one for 64-bit
Windows!
Wasn't there some sort of add-on for modern Windows to allow old DOS
applications, such as QBASIC, to run on it?
Windows comes with a virtual machine, however Microsoft in recent
releases has crippled it in enough ways to make it less useful than it
once was. It was used to support a "Virtual XP" in some versions of
Windows 7 that allowed 32-bit-only code to run, but that broke with some
16-bit code.
Yes; this is "Windows XP Mode", available with Windows 7 Professional. But it
is not available in Windows 8 or later.
John Savard
The virtual machine is there, assuming you have a CPU with the right
features to support it. What's not there is the XP license.
Why should you have to run XP when you want to use XP-era or earlier
apps if there is an emulator? You don't need the old hardware support
unless the app was written specifically for a piece of hardware that
isn't on the system.

/BAH
J. Clarke
2016-02-13 15:24:51 UTC
Permalink
In article <***@aca41062.ipt.aol.com>, ***@aol.com
says...
Post by Quadibloc
Post by J. Clarke
Post by Quadibloc
Post by J. Clarke
Post by h***@bbs.cpcn.com
Post by Quadibloc
They also have a version that runs on 32-bit Windows ... but *not*
one for 64-bit
Post by J. Clarke
Post by Quadibloc
Post by J. Clarke
Post by h***@bbs.cpcn.com
Post by Quadibloc
Windows!
Wasn't there some sort of add-on for modern Windows to allow old DOS
applications, such as QBASIC, to run on it?
Windows comes with a virtual machine, however Microsoft in recent
releases has crippled it in enough ways to make it less useful than it
once was. It was used to support a "Virtual XP" in some versions of
Windows 7 that allowed 32-bit-only code to run, but that broke with some
16-bit code.
Yes; this is "Windows XP Mode", available with Windows 7 Professional. But
it
Post by J. Clarke
Post by Quadibloc
is not available in Windows 8 or later.
John Savard
The virtual machine is there, assuming you have a CPU with the right
features to support it. What's not there is the XP license.
Why should you have to run XP when you want to use XP, or earlier apps
if there is an emulator? You don't need the old hardware support
unless the app was written specifically for a piece of hardware which
isn't on the system.
The virtual machine is not an emulator; it's a virtualization of the
CPU, with the peripherals emulated. You have to install an OS on it to
be able to do anything with it.
h***@bbs.cpcn.com
2016-02-13 07:20:14 UTC
Permalink
Post by Quadibloc
Yes; this is "Windows XP Mode", available with Windows 7 Professional. But it
is not available in Windows 8 or later.
Thanks for the various explanations.

Now: can a lay user, who buys a new PC with whatever Windows is
supplied now, do something to allow him to run old DOS applications
(not games, but applications like old versions of Lotus, compiled
QuickBASIC programs, or QBASIC)?

If so, what has to be done?

Thanks.
gareth
2016-02-13 11:09:52 UTC
Permalink
Post by h***@bbs.cpcn.com
Post by Quadibloc
Yes; this is "Windows XP Mode", available with Windows 7 Professional. But it
is not available in Windows 8 or later.
Thanks for the various explanations.
Now: can a lay user, who buys a new PC with whatever Windows is
supplied now, do something to allow him to run old DOS applications
(not games, but applications, like old versions of Lotus, or compiled
QuickBASIC programs, or QBASIC?)
Those of us with the right background knowledge and experience to
write an 8086 machine code emulator?
Charlie Gibbs
2016-02-14 00:53:10 UTC
Permalink
Post by gareth
Post by h***@bbs.cpcn.com
Post by Quadibloc
Yes; this is "Windows XP Mode", available with Windows 7 Professional.
But it is not available in Windows 8 or later.
Thanks for the various explanations.
Now: can a lay user, who buys a new PC with whatever Windows is
supplied now, do something to allow him to run old DOS applications
(not games, but applications, like old versions of Lotus, or compiled
QuickBASIC programs, or QBASIC?)
Those of us with the right background knowledge and experience to
write an 8086 machine code emulator?
Or those of us with a Linux-using friend who can help them buy a machine
(preferably without paying the Microsoft tax), install Linux on it, and
then set up dosemu. I just ran a test on my quad-core 64-bit box, and
a couple of QuirkBASIC programs that I last compiled in 2009 recompiled
just fine.
--
/~\ ***@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!
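[Editor's note: for anyone wanting to follow that route, DOSBox (mentioned at the top of the thread) is another option that avoids installing an OS at all. Below is a minimal sketch, assuming DOSBox is installed and that your DOS files, including a QBASIC.EXE, live in ~/dos (both the directory and the filename are assumptions); it writes a config that mounts that directory as drive C: and auto-starts QBASIC.]

```shell
#!/bin/sh
# Sketch: write a minimal DOSBox config that mounts a host directory
# as drive C: and launches QBASIC on startup.  The directory (~/dos)
# and the QBASIC.EXE location are illustrative assumptions.
DOSDIR="$HOME/dos"
mkdir -p "$DOSDIR"

# Unquoted heredoc so $DOSDIR expands into the [autoexec] section.
cat > "$DOSDIR/dosbox-qbasic.conf" <<EOF
[cpu]
# Fixed cycles keep old software running at a predictable speed.
cycles=fixed 70000

[autoexec]
mount c $DOSDIR
c:
QBASIC.EXE
EOF

echo "Wrote $DOSDIR/dosbox-qbasic.conf"
```

Then start it with `dosbox -conf ~/dos/dosbox-qbasic.conf`. The cycles value is just a starting point and may need tuning per machine.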
Ahem A Rivet's Shot
2016-02-13 11:33:54 UTC
Permalink
On Fri, 12 Feb 2016 23:20:14 -0800 (PST)
Post by h***@bbs.cpcn.com
Post by Quadibloc
Yes; this is "Windows XP Mode", available with Windows 7 Professional.
But it is not available in Windows 8 or later.
Thanks for the various explanations.
Now: can a lay user, who buys a new PC with whatever Windows is
supplied now, do something to allow him to run old DOS applications
(not games, but applications, like old versions of Lotus, or compiled
QuickBASIC programs, or QBASIC?)
If so, what has to be done?
Install MSDOS (or FreeDOS) and away you go.
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/