View Full Version : Windows XP 64
Ritual
December 11th 07, 01:14 AM
I am building a new computer. It is a life-investment, and important.
I want more than 2G of RAM but hear bad things about XP64 and audio. I
use Cubase SX + many VST's and plugins and other apps. I don't want to
get screwed.
Is anyone here running Win XP 64 with problems? What about success?
- Rit
Scott Dorsey
December 11th 07, 01:50 AM
Ritual <Ritual> wrote:
>I am building a new computer. It is a life-investment, and important.
>I want more than 2G of RAM but hear bad things about XP64 and audio. I
>use Cubase SX + many VST's and plugins and other apps. I don't want to
>get screwed.
Computers are not life investments. Computers today are disposable and
you need to treat them that way. Sadly, they are built to be disposable
too, both hardware and software.
>Is anyone here running Win XP 64 with problems? What about success?
I know plenty of folks who are using it, some are happy and some are
not. You need to know if Cubase users are happy with it, and you don't
really need to care about anyone else. Try the Cubase mailing list
and you should get some useful opinions.
But seriously, don't treat a modern PC as anything other than a temporary
product.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Mike Rivers
December 11th 07, 02:23 AM
On Dec 10, 8:14 pm, Ritual <Ritual> wrote:
> I am building a new computer. It is a life-investment
Do you have a terminal illness? Sorry to hear that. All computers are
obsolete in a couple of years, though they may continue to work longer
than that. There's no such thing as a life-investment when it comes to
computers (or anything containing one).
> I want more than 2G of RAM but hear bad things about XP64 and audio. I
> use Cubase SX + many VST's and plugins and other apps. I don't want to
> get screwed.
Then get plain old Windows XP. It'll work with anything you can buy
today and will continue to work as long as you don't replace your
software or hardware.
Mike Rivers
December 11th 07, 02:25 AM
On Dec 10, 8:50 pm, (Scott Dorsey) wrote:
> You need to know if Cubase users are happy with it, and you don't
> really need to care about anyone else.
He also needs to know if whatever audio hardware he buys has drivers
that are compatible with 64-bit Windows. Not all of it is.
Peter Larsen[_2_]
December 11th 07, 06:00 AM
Ritual wrote:
> I am building a new computer. It is a life-investment, and important.
Expected usable lifetime of a desktop machine: 6 years; expected usable
lifetime of a laptop: 3 years. They may last longer, but will be outdated.
> I want more than 2G of RAM but hear bad things about XP64 and audio. I
> use Cubase SX + many VST's and plugins and other apps. I don't want to
> get screwed.
It is bad business to do things in a way that is more costly than necessary.
> Is anyone here running Win XP 64 with problems? What about success?
To get that you need hardware and software that is guaranteed to work on it.
Most is "not supported on XP64", which means that the support advice if you
have a problem is to replace the OS.
> - Rit
Kind regards
Peter Larsen
Laurence Payne
December 11th 07, 10:06 AM
On Mon, 10 Dec 2007 20:14:32 -0500, Ritual <Ritual> wrote:
>
>I am building a new computer. It is a life-investment, and important.
No it isn't. Like any computer, it will be looking very ordinary
within 3 years, completely obsolete in 6. The trick is finding the
efficient price-point NOW.
Arny Krueger
December 11th 07, 01:10 PM
"Laurence Payne" <NOSPAMlpayne1ATdsl.pipex.com> wrote in
message
> On Mon, 10 Dec 2007 20:14:32 -0500, Ritual <Ritual> wrote:
>
>>
>> I am building a new computer. It is a life-investment,
>> and important.
> No it isn't. Like any computer, it will be looking very
> ordinary within 3 years, completely obsolete in 6. The
> trick is finding the efficient price-point NOW.
If we go with the 6-year life cycle we might have the following goals:
(1) Effective in some sense but probably not maximally efficient at the end
of life.
(2) Still pretty nice to work with at the three year point.
(3) Near-peak effectiveness when built. The most disappointing thing in the
world is a brand new tool that is broken, and it's still possible to build a
new computer that is so bleeding-edge that it is not efficient.
That suggests to me that we do some limited overbuilding during initial
construction as compared to maximum efficiency up front.
Also, we plan on some mid-life kickers.
Getting back to 64 bits, the justification for 64 bits is before us - a 32
bit computer can't use all 4 gigs of RAM for loading and running
programs and data. Even 3 GB is not universally supported by all application
programs.
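The 4 GB ceiling is just pointer-width arithmetic; a quick illustrative sketch (not from the original posts, the function name is mine):

```python
# Illustrative only: maximum directly addressable memory for a flat
# N-bit pointer. A 32-bit machine tops out at 4 GiB; the 24-bit
# addressing of the IBM 360 era topped out at 16 MiB.
def address_space_bytes(pointer_bits):
    return 2 ** pointer_bits

print(address_space_bytes(24) // 1024**2)  # 16 (MiB)
print(address_space_bytes(32) // 1024**3)  # 4  (GiB)
```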
The big price of 64 bits is the limited availability and stability of device
drivers and other system code.
The question is - do we need 4 GB to effectively run audio?
Certainly not for reasonable (at least 32 channel) multitracking.
I don't do MIDI, so I don't know about that.
Comments?
Scott Dorsey
December 11th 07, 02:39 PM
Arny Krueger > wrote:
>
>The big price of 64 bits is the limited availability and stability of device
>drivers and other system code.
>
>The question is - do we need 4 GB to effectively run audio?
>
>Certainly not for reasonable (at least 32 channel) multitracking.
I could see huge datasets being useful for reverb emulation, or if you
wanted a few day's worth of audio online. And I have worked on projects
where there were hundreds of hours of audio that needed to be dealt with.
Keeping all that in memory probably isn't essential but it wouldn't hurt.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Mr Soul
December 11th 07, 03:31 PM
> I want more than 2G of RAM but hear bad things about XP64 and audio. I
You can put more than 2 GB of RAM on 32-bit Windows XP?
> Is anyone here running Win XP 64 with problems? What about success?
I use it at work, but if you want to do audio work and you want 64-bit and
more than 4 GB of RAM, then I would suggest looking at Vista. The audio
drivers will be supported on Vista, whereas on XP 64 they might work but
they won't be supported.
Mike
http://www.pcDAW.net
Geoff
December 11th 07, 09:58 PM
Mike Rivers wrote:
> On Dec 10, 8:50 pm, (Scott Dorsey) wrote:
>
>> You need to know if Cubase users are happy with it, and you don't
>> really need to care about anyone else.
>
> He also needs to know if whatever audio hardware he buys has drivers
> that are compatible with 64-bit Windows. Not all of it is.
Most isn't, and never will be.
geoff
Peter Larsen[_2_]
December 12th 07, 11:26 AM
Arny Krueger wrote:
> Getting back to 64 bits, the justification for 64 bits is before us -
> a 32 bit computer can't use all 4 gigs of 4 gigs of RAM for loading
> and running programs and data. Even 3 GB is not universally supported
> by all application programs.
I think it is a kludge they came up with for their SQL server, it is
incredibly good at gobbling up memory space when caching records.
Kind regards
Peter Larsen
William Sommerwerck
December 12th 07, 11:52 AM
> Getting back to 64 bits, the justification for 64 bits is before us -
> a 32 bit computer can't use all 4 gigs of 4 gigs of RAM for loading
> and running programs and data. Even 3 GB is not universally
> supported by all application programs.
The "bit size" of a computer refers to largest datum it can handle in a
single instruction. It has nothing, per se, to do with the size of the
address space.
Arny Krueger
December 12th 07, 12:30 PM
"William Sommerwerck" > wrote in
message
>> Getting back to 64 bits, the justification for 64 bits
>> is before us - a 32 bit computer can't use all 4 gigs of
>> 4 gigs of RAM for loading and running programs and data.
>> Even 3 GB is not universally supported by all
>> application programs.
>
> The "bit size" of a computer refers to largest datum it
> can handle in a single instruction. It has nothing, per
> se, to do with the size of the address space.
Actually, that hasn't been true for about 40 years.
Back in the late 60s, just about any self-respecting computer would support
single precision floating point, which was around 32 bits. Most would
support double-precision floating point, which implies data words on the
order of 60-64 bits. Many whould support quad-precision, or up to 128 bits.
Some would support character string data, which could be up to 256 bytes or
2048 bits.
However, the size of address pointers in computers of the day was often far
more limited. Therefore, the true measure of a computer was the size of the
main memory address register, not the data registers.
The most popular large computers of the day and many days since, were IBM
360s, which supported whopping big 24 bit addresses. Hey, that was 16
megabytes of RAM, an amount of memory that was nearly impossible to believe
could ever exist, given that RAM packaged to plug in and run cost about
$2,000 a kilobyte. I still remember being a computer operator in the day,
standing next to GM's largest computer with an unbelievable 256 kilobytes of
RAM. The RAM module was a box big enough for 4 people to play cards in,
standing or sitting around a small table.
In the mid 1980s, fast mainframe RAM was down to something like $10,000 per
megabyte packaged to plug in and run, and 16 megabyte computers were
starting to be pretty common. 24 bit addressing wouldn't hack it, and there
was a wrenching change while operating systems and application programs were
first kluged and later re-written to exploit 32 bit addressing. The
change-over took several years. 4 gigabyte address spaces - only possible in
dreams and virtual memory, right? ;-)
And here we are today, where middle school kids are building computers in
their bedrooms with 4 GB of real RAM, and doing it with allowance money. 4
GB of fast RAM will probably sell for less than $100 some time next year or
early the year after.
Yawn! ;-)
Laurence Payne
December 12th 07, 12:36 PM
On Wed, 12 Dec 2007 07:30:05 -0500, "Arny Krueger" >
wrote:
>> The "bit size" of a computer refers to largest datum it
>> can handle in a single instruction. It has nothing, per
>> se, to do with the size of the address space.
>
>Actually, that hasn't been true for about 40 years.
>
>Back in the late 60s, just about any self-respecting computer would support
>single precision floating point, which was around 32 bits
But could it manipulate it in a single instruction? It's the same
issue with addressing. Sure, a 32-bit system could address any memory
size you liked. In a sense, it does it whenever it reads a large hard
drive. But could it do it quickly, without a paging or indexing layer
getting in the way?
William Sommerwerck
December 12th 07, 12:40 PM
"Arny Krueger" > wrote in message
> "William Sommerwerck" > wrote in
> message
>>> Getting back to 64 bits, the justification for 64 bits
>>> is before us - a 32 bit computer can't use all 4 gigs of
>>> 4 gigs of RAM for loading and running programs and data.
>>> Even 3 GB is not universally supported by all
>>> application programs.
>> The "bit size" of a computer refers to largest datum it
>> can handle in a single instruction. It has nothing, per
>> se, to do with the size of the address space.
> Actually, that hasn't been true for about 40 years.
<correct, as far as I know, information snipped>
I was thinking of microprocessors.
Arny Krueger
December 12th 07, 12:55 PM
"Laurence Payne" <NOSPAMlpayne1ATdsl.pipex.com> wrote in
message
> On Wed, 12 Dec 2007 07:30:05 -0500, "Arny Krueger"
> > wrote:
>
>>> The "bit size" of a computer refers to largest datum it
>>> can handle in a single instruction. It has nothing, per
>>> se, to do with the size of the address space.
>> Actually, that hasn't been true for about 40 years.
>> Back in the late 60s, just about any self-respecting
>> computer would support single precision floating point,
>> which was around 32 bits
> But could it manipulate it in a single instruction?
Yes.
There were even mainstream business computers like the CDC Cybers, that had
*only* 60 and 120 bit data words. Nothing shorter.
> It's the same issue with addressing. Sure, a 32-bit system
> could address any memory size you liked. In a sense, it
> does it whenever it reads a large hard drive. But could
> it do it quickly, without a paging or indexing layer
> getting in the way?
Yes. The IBM 360 instruction set had a full complement of instructions for
32, 64, 128 bit floating point, and up to 2k bit character data. The 360s
(other than the 67) were real memory computers, so no paging. The
instructions supported indexed addresses, but the address index registers
were static during instruction execution so there was no game-playing there.
Arny Krueger
December 12th 07, 01:03 PM
"William Sommerwerck" > wrote in
message
> "Arny Krueger" > wrote in message
>> "William Sommerwerck" > wrote
>> in message
>>
>>>> Getting back to 64 bits, the justification for 64 bits
>>>> is before us - a 32 bit computer can't use all 4 gigs
>>>> of 4 gigs of RAM for loading and running programs and
>>>> data. Even 3 GB is not universally supported by all
>>>> application programs.
>
>>> The "bit size" of a computer refers to largest datum it
>>> can handle in a single instruction. It has nothing, per
>>> se, to do with the size of the address space.
>> Actually, that hasn't been true for about 40 years.
> <correct, as far as I know, information snipped>
> I was thinking of microprocessors.
Same problem, only the names and dates are changed. For example, the 8088 was
called an 8 bit computer, but it had registers that were longer than 8
bits - at least 16 bits, and the effective addresses were more like 20 or 24
bits, if memory serves. 8 bits was the size of the external memory access
busses for main memory, but single instructions generally worked with longer
data and addressing registers.
The 8088 was more like a 16 bit computer with multiplexors on the external
bus.
If memory serves, the 16 bit 8086 even came out first because it was
actually a little bit simpler chip.
Scott Dorsey
December 12th 07, 01:58 PM
In article >,
Laurence Payne <NOSPAMlpayne1ATdsl.pipex.com> wrote:
>On Wed, 12 Dec 2007 07:30:05 -0500, "Arny Krueger" >
>wrote:
>
>>> The "bit size" of a computer refers to largest datum it
>>> can handle in a single instruction. It has nothing, per
>>> se, to do with the size of the address space.
>>
>>Actually, that hasn't been true for about 40 years.
>>
>>Back in the late 60s, just about any self-respecting computer would support
>>single precision floating point, which was around 32 bits
>
>But could it manipulate it in a single instruction? It's the same
>issue with addressing. Sure, a 32-bit system could address any memory
>size you liked. In a sense, it does it whenever it reads a large hard
>drive. But could it do it quickly, without a paging or indexing layer
>getting in the way?
That's an implementation issue, not an architectural issue.
For example, the VAX had 32-bit long addresses... but many model machines
only had 22 address lines in the backplane. The architecture had 32-bit
addressing, but the physical machine was not capable of it.
On the other hand, the original 8086 had 16 bits available in the
instruction for the address, a legacy of the old 8080. But, it had
an address bus that was 20 bits wide. Intel managed to get around
this by using segment registers that would select the high bits of
the addresses in use; you could use a megabyte but only 64K at a time.
Now, that's nasty.
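The segment-register scheme described above can be sketched in a few lines. The arithmetic is the documented 8086 real-mode rule (physical = segment * 16 + offset); the function name is mine:

```python
# 8086 real-mode addressing: two 16-bit quantities combine into one
# 20-bit physical address, so you can reach 1 MiB but only through
# 64 KiB windows selected by the segment register.
def physical_address(segment, offset):
    assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
    return (segment << 4) + offset

# With a fixed segment, offsets span exactly one 64 KiB window.
assert physical_address(0x0000, 0xFFFF) == 0xFFFF
# Top of the 1 MiB (20-bit) space:
assert physical_address(0xF000, 0xFFFF) == 0xFFFFF
# Aliasing: many segment:offset pairs name the same physical byte.
assert physical_address(0x1234, 0x0005) == physical_address(0x1230, 0x0045)
```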
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Scott Dorsey
December 12th 07, 02:11 PM
Arny Krueger > wrote:
>"Laurence Payne" <NOSPAMlpayne1ATdsl.pipex.com> wrote in
>message
>> On Wed, 12 Dec 2007 07:30:05 -0500, "Arny Krueger"
>> > wrote:
>>
>>>> The "bit size" of a computer refers to largest datum it
>>>> can handle in a single instruction. It has nothing, per
>>>> se, to do with the size of the address space.
>
>>> Actually, that hasn't been true for about 40 years.
>
>>> Back in the late 60s, just about any self-respecting
>>> computer would support single precision floating point,
>>> which was around 32 bits
>
>> But could it manipulate it in a single instruction?
>
>Yes.
>
>There were even mainstream business computers like the CDC Cybers, that had
>*only* 60 and 120 bit data words. Nothing shorter.
The Cyber was an abomination.
It had only 60-bit data words, true. But, they were only floats. If you
wanted to use an integer, you treated it like a float with a zero exponent.
So a single precision float was 60 bits, but an int was 48 bits.
Addresses were only 18 bits long, and if you moved an integer to an
address register it used only the lower 18 bits. Oh yeah, and since there
was no virtual memory, you're stuck with those 18 bits.
They did some trickery on the Cyber 180 series to allow virtual memory,
but nobody used it because it took forever for CDC to get their virtual
memory OS shipped, and when they finally did it was a bloated monstrosity
that nobody wanted. Sort of like TSS/370.
>> It's the same issue with addressing. Sure, a 32-bit system
>> could address any memory size you liked. In a sense, it
>> does it whenever it reads a large hard drive. But could
>> it do it quickly, without a paging or indexing layer
>> getting in the way?
>
>Yes. The IBM 360 instruction set had a full complement of instructions for
>32, 64, 128 bit floating point, and up to 2k bit character data. The 360s
>(other than the 67) were real memory computers, so no paging. The
>instructions supported indexed addresses, but the address index registers
>were static during instruction execution so there was no game-playing there.
Well, you could throw a DAT box on your 360/50... As I recall the 360
also had 16-bit addresses but it's been 30 years since I wrote an RS
or an RX...
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Arny Krueger
December 12th 07, 02:46 PM
"Scott Dorsey" > wrote in message
> In article >,
> Laurence Payne <NOSPAMlpayne1ATdsl.pipex.com> wrote:
>> On Wed, 12 Dec 2007 07:30:05 -0500, "Arny Krueger"
>> > wrote:
>>
>>>> The "bit size" of a computer refers to largest datum it
>>>> can handle in a single instruction. It has nothing, per
>>>> se, to do with the size of the address space.
>>>
>>> Actually, that hasn't been true for about 40 years.
>>>
>>> Back in the late 60s, just about any self-respecting
>>> computer would support single precision floating
>>> point, which was around 32 bits
>>
>> But could it manipulate it in a single instruction?
>> It's the same issue with addressing. Sure, a 32-bit
>> system could address any memory size you liked. In a
>> sense, it does it whenever it reads a large hard drive.
>> But could it do it quickly, without a paging or indexing
>> layer getting in the way?
>
> That's an implementation issue, not an architectural
> issue.
> For example, the VAX had 32-bit long addresses... but
> many model machines only had 22 address lines in the
> backplane. The architecture had 32-bit addressing, but
> the physical machine was not capable of it.
Which was the exact opposite of some later 370s, which could address more
physical storage than their OS's could access at one time.
> On the other hand, the original 8086 had 16 bits
> available in the instruction for the address, a legacy of
> the old 8080. But, it had
> an address buss that was 20 bits wide. Intel managed to
> get around
> this by using segment registers that would select the
> high bits of
> the addresses in use; you could use a megabyte but only
> 64K at a time. Now, that's nasty.
If you want nasty, consider the widely-used 360 architecture. You could use
16 megabytes, but only 4K or 8K at a time, depending on how you counted.
Doug McDonald
December 12th 07, 03:06 PM
Arny Krueger wrote:
> Yes. The IBM 360 instruction set had a full complement of instructions for
> 32, 64, 128 bit floating point, and up to 2k bit character data. The 360s
> (other than the 67) were real memory computers, so no paging. The
> instructions supported indexed addresses, but the address index registers
> were static during instruction execution so there was no game-playing there.
>
Not really. If you actually mean index registers, as opposed to
page registers, the standard Fortran compiler made exceedingly
extensive use of them. They are not called "index" registers
for nothing!
If you want disconnect between "word" size and "bits per
addressable unit" and memory size, I suggest you go back to
the early 1960s and consider the IBM 1620, which had
6 bits per addressable unit (one decimal digit plus a
"flag bit", and a parity bit too), 20, 40 or 60 thousand
digits of memory, and a "word size" that was variable
(using the flag bit) from 2 to 20000 decimal digits.
Doug McDonald
Doug McDonald
December 12th 07, 03:13 PM
Arny Krueger wrote:
>
> If you want nasty, consider the widely-used 360 architecture. You could use
> 16 megabytes, but only 4K or 8K at a time, depending on how you counted.
>
>
That's not true. You could address only that as an offset inside
an instruction. You could address the whole address space
through other means, e.g. the index registers, which were
usable just as they are in say a Pentium or any RISC chip.
You just had to load them.
I suspect that they didn't give much thought to that
silly little construct the wacko professors talked about,
"the stack", and that was what the 4 K was for ... "nobody
would want bigger than a 4K stack". :-)
Doug McDonald
Arny Krueger
December 12th 07, 03:21 PM
"Doug McDonald" > wrote in
message
> Arny Krueger wrote:
>
>> Yes. The IBM 360 instruction set had a full complement
>> of instructions for 32, 64, 128 bit floating point, and
>> up to 2k bit character data. The 360s (other than the
>> 67) were real memory computers, so no paging. The
>> instructions supported indexed addresses, but the
>> address index registers were static during instruction
>> execution so there was no game-playing there.
> Not really. If you actually mean index registers, as
> opposed to page registers, the standard Fortran compiler
> made exceedingly extensive use of them. They are not
> called "index" registers for nothing!
I agree with your facts, but the span of the original discussion was one
instruction, not a program or a subprogram. The base and index registers
were fixed during the execution of each instruction. If the instruction
changed them, the change was not effective until the next instruction
started. Not all instructions even had index registers for every operand
that referred to memory, if my increasingly foggy memory serves.
> If you want disconnect between "word" size and "bits per
> addressable unit" and memory size, I suggest you go back
> to the early 1960 and consider the IBM 1620, which had
> 6 bits per "addressable unit (one decimal digit plus
> "flag bit", and a parity bit too), 20, 40 or 60 thousand
> digits of memory, and a "word size" that was variable
> (suing the flag bit) from 2 to 20000 decimal digits.
1620? Very interesting machine. We called it "The CADET". Can't Add, Doesn't
Even Try!
Arny Krueger
December 12th 07, 03:37 PM
"Doug McDonald" > wrote in
message
> Arny Krueger wrote:
>
>> If you want nasty, consider the widely-used 360
>> architecture. You could use 16 megabytes, but only 4K or 8K at a time,
>> depending on how you counted.
> That's not true.
It was true in general practice.
> You could address only that as an offset
> inside an instruction.
That was the lowest common denominator for general programming.
Rule of thumb was that all programs were limited to 4K (prior to
link-editing) and all control blocks were limited to 4K.
> You could address the whole
> address space through other means, e.g. the index
> registers,
Not all instructions included index registers. RX instructions were a big
subset, but it was hard to write a useful program with only RX instructions.
> which were usable just as they are in say a
> Pentium or any RISC chip. You just had to load them.
Only relevant for RX instructions.
> I suspect that they didn't give much thought to that
> silly little construct the wacko professors talked about,
> "the stack", and that was what the 4 K was for ... "nobody
> would want bigger than a 4K stack". :-)
Frankly, I never heard that lecture - as I was programming professionally
years before CS became a subject that was taught in very many places.
Mr Soul
December 12th 07, 08:19 PM
I got a kick out of your "any respectable computer" line. The IBM 360 was
a very expensive mainframe computer, so that's kind of stating the
obvious to me. So yes, any expensive mainframe computer might have
all these characteristics.
> > But could it manipulate it in a single instruction?
>
> Yes.
It could do certain operations in a single instruction but not all.
> Yes. The IBM 360 instruction set had a full complement of instructions for
> 32, 64, 128 bit floating point, and up to 2k bit character data. The 360s
> (other than the 67) were real memory computers, so no paging. The
> instructions supported indexed addresses, but the address index registers
> were static during instruction execution so there was no game-playing there.
Well that's if you had the Scientific Instruction Set installed. This
implies software emulation to me, so I doubt if the processor itself
had these instructions.
Mike
Linton Yarbrough
December 13th 07, 08:08 AM
On Mon, 10 Dec 2007 20:14:32 -0500, Ritual wrote:
> I am building a new computer. It is a life-investment, and important.
You're ****ed out of the box then, no such thing.
Arny Krueger
December 13th 07, 12:18 PM
"Mr Soul" > wrote in message
> I got a kick of your "any respectable computer" line.
> The IBM 360 was a very expensive, mainframe computer, so
> that's kind of stating the obvious to me.
Many of my comments applied to minicomputers, as well.
> So yes, any
> expensive mainframe computer might have all these
> characteristics.
In those days, all computers were expensive.
>>> But could it manipulate it in a single instruction?
>> Yes.
> It could do certain operations in a single instruction
> but not all.
I don't know where you are headed with this. As a programmer, you coded one
instruction, and from your viewpoint, the instructions were executed
serially, one at a time.
What happened under the covers was what it was, but from the standpoint of a
programmer, Principles of Operations was true.
>> Yes. The IBM 360 instruction set had a full complement
>> of instructions for 32, 64, 128 bit floating point, and
>> up to 2k bit character data. The 360s (other than the
>> 67) were real memory computers, so no paging. The
>> instructions supported indexed addresses, but the
>> address index registers were static during instruction
>> execution so there was no game-playing there.
> Well that's if you had the Scientific Instruction Set
> installed.
The Scientific Instruction Set was standard in all but the very smallest
members of the line. It might have been optional in the order book in the
middle of the line, but try to get something like a Model 65 without it!
Also, IBM played the usual game - some so-called hardware features were
built into every box, but it took a hardware guy to turn them on if you
didn't order the CPU with it.
> This implies software emulation to me, so I
> doubt if the processor itself had these instructions.
It wasn't software emulation at all from the standpoint of the operator or
programmer.
Of course 360 featured microcode, and that blurs some lines. The
architecture of the hardware that executed the microcode was vastly
different from the architecture that the programmer and operator saw.
For example, the 360/20 and 30 hardware accumulator was only 1 byte wide and
had only 4 functions. Therefore, just about every 360 instruction involved
several microcoded steps on the 20. However, the processor that executed the
microcode used a very long instruction (ca. 60 bits), and bore no
resemblance at all to a System 360.
On some models the same microcoded processor could be switched to run a
different set of microcode, which did a very effective job of running
programs that were written for 1401s.
BTW, the 360/20 and the 360/30 did not share much in the way of hardware.
The 30 and the 50 were more similar to each other. The 20, the 40, and the
disk controllers were very similar under the covers. Then there was the 22
which was really a lobotomized 30.
I recall that Lockheed bought a bunch of small 360s, I think 30s, licensed
the microcode technology, and built machines that implemented yet another
different architecture.
The larger 360s did run software simulators. We dredged up a 7074 simulator
that was originally written for OS/PCP running on a 360/50 and made it run
on a 3033 under MVS when our company found out it had some legacy
application code that it really needed to run. A simulator is a completely
different thing than microcode.
IBM blurred that line too, and had simulators that interacted with microcode
assists to improve performance for frequently-used operations. A microcode
assist was usually invoked with an esoteric op code or esoteric parameters
for a documented op code like DIAG.
AFAIK there were at least a few microcoded instructions in every model. But,
the larger boxes had real dedicated hardware for data manipulation. There
were definitely boxes with hardware floating point. I think that started
around the 65.
Mr Soul
December 13th 07, 01:08 PM
> Many of my comments applied to minicomputers, as well.
What mini-computers existed in the '60s, which is the period you were
talking about? I'm not aware of any, but then again, I was only 10 in
'65.
> In those days, all computers were expensive.
Like I said, the obvious.
> I don't know where you are headed with this. As a programmer, you coded one
> instruction, and from your viewpoint, the instructions were executed
> serially, one at a time.
I am talking about one instruction for doing an operation like, say, add. I
do believe the IBM 360 could do floating point math in 1 instruction.
> It wasn't software emulation at all from the standpoint of the operator or
> programmer.
I am talking about the machine, not the programmer. From my reading,
floating point math in the 360 was done in software not hardware (but
I could be wrong).
I only used the 360 for a short time (in the 80's) before I started
using a VAX, so I am not an expert on them but I looked up the
information on the net.
Mike
Arny Krueger
December 13th 07, 01:56 PM
"Mr Soul" > wrote in message
>> Many of my comments applied to minicomputers, as well.
> What mini-computers existed in the 60's which is the
> period you were talking about? I'm not aware of any but
> then again, I was only 10 in 65.
DEC early PDP series and Data General.
>> In those days, all computers were expensive.
> Like I said, the obvious.
>> I don't know where you are headed with this. As a
>> programmer, you coded one instruction, and from your
>> viewpoint, the instructions were executed serially, one
>> at a time.
> I am talking one instruction for doing an operation like
> say add. I do believe IBM 360 could do floating point
> math in 1 instruction.
The usual floating point machine instruction was a register-register, or
register-storage instruction. The standard programming was something like:
Load accumulator or floating point register
Modify accumulator or register with data from storage or another register
Store accumulator or register to storage if required.
That would be three machine instructions, whether they were microcoded or
hard-wired.
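That load / modify / store pattern can be mimicked in a toy sketch (the dict-as-memory and variable names are mine; the mnemonics in the comments are the S/360 short-float ones, from memory):

```python
# Toy model of the register-storage pattern: one floating-point
# register, three "instructions" to add two storage operands.
memory = {"A": 1.5, "B": 2.25, "C": 0.0}

fpr = memory["A"]      # LE  0,A  -- load register from storage
fpr += memory["B"]     # AE  0,B  -- add storage operand into register
memory["C"] = fpr      # STE 0,C  -- store register back to storage

print(memory["C"])     # 3.75
```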
>> It wasn't software emulation at all from the standpoint
>> of the operator or programmer.
> I am talking about the machine, not the programmer.
What the machine did under the covers varied with the machine. The
instruction set was the interface between most humans and the machine.
> From my reading, floating point math in the 360 was done
> in software not hardware (but I could be wrong).
The 360s used microcode, which we now call firmware, not software. 360s
did not use software simulation as the term is usually used, to do floating
point arithmetic. However, it is easy to confuse microcode or firmware with
software, because both are programs.
Prior to the 360, even IBM's smallest machines were hard-wired. I remember
looking at the schematic for the arithmetic unit of a 1460, which was a
small business machine.
> I only used the 360 for a short time (in the 80's) before
> I started using using a VAX, so I am not an expert on
> them but I looked up the information on the net.
The VAX used microcode.
This article may help clarify things:
http://en.wikipedia.org/wiki/Microcode
Scott Dorsey
December 13th 07, 03:23 PM
Arny Krueger > wrote:
>"Mr Soul" > wrote in message
>
>> I got a kick of your "any respectable computer" line.
>
>> The IBM 360 was a very expensive, mainframe computer, so
>> that's kind of stating the obvious to me.
>
>Many of my comments applied to minicomputers, as well.
The 360 was a whole series of computers that all (mostly) shared the
same instruction set. So you could buy a cheap bargain basement machine
like the 360/30 with very little up-front investment, and then later on
upgrade all the way up to a very expensive 360/192, while keeping the
same peripherals and the same software.
Unless you bought a 360/44, in which case you were screwed.
IBM originated this whole idea: a line of very different computers with
very different architectures, at different price points and with different
semiconductor technologies, but built with microcode to allow them all to
run the same software.
This concept is what made the computer industry what it is today, really.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Scott Dorsey
December 13th 07, 03:27 PM
Arny Krueger > wrote:
>
>The 360s used microcode, which we now call firmware, not software. 360s
>did not use software simulation as the term is usually used, to do floating
>point arithmetic. However, it is easy to confuse microcode or firmware with
>software, because both are programs.
Microcode is not software, or firmware. It's microcode.
And I believe there was at least one 360 model that was not microcoded.
IBM, though, was the first one to use microcode to decouple the instruction
set and the hardware architecture. That's a big deal.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Mr Soul
December 13th 07, 03:48 PM
> DEC early PDP series and Data General.
Right. The PDP-11 was the first computer that I programmed on as a
software engineer outside of school. However, it did not have built-in
floating point instructions either; there was an extra FPP.
> The usual floating point machine instruction was a register-register, or
> register-storage instruction. The standard programming was something like:
>
> Load accumulator or floating point register
> Modify accumulator or register with data from storage or another register
> Store accumulator or register to storage if required.
That is correct.
> What the machine did under the covers varied with the machine. The
> instruction set was the interface between most humans and the machine.
Sorry, when I said "machine" I meant "machine instruction", which is the
human interface.
>
> > From my reading, floating point math in the 360 was done
> > in software not hardware (but I could be wrong).
>
> The 360s used microcode, which we now call firmware, not software. 360s
> did not use software simulation as the term is usually used, to do floating
> point arithmetic. However, it is easy to confuse microcode or firmware with
> software, because both are programs.
I know what microcode is & I've programmed in assembler before.
From Wikipedia: "Confusingly, some today hardware vendors, especially
IBM, use microcode as a synonym of a firmware, whether it actually
implements the microprogramming of a processor or not.[1] "
Mike
Arny Krueger
December 13th 07, 04:01 PM
"Scott Dorsey" > wrote in message
> Arny Krueger > wrote:
>>
>> The 360s used microcode, which we now call firmware,
>> not software. 360s did not use software simulation as
>> the term is usually used, to do floating point
>> arithmetic. However, it is easy to confuse microcode or
>> firmware with software, because both are programs.
> Microcode is not software, or firmware. It's microcode.
Firmware is a superset of microcode.
> And I believe there was at least one 360 model that was
> not microcoded.
I'm not sure about that. The 93 had very little microcode, but I believe
that there was some in the I/O department.
> IBM, though, was the first one to use microcode to
> decouple the instruction set and the hardware
> architecture. That's a a big deal. --scott
This article tells it a little differently, and I recall some of what they
present.
http://en.wikipedia.org/wiki/Microcode
"In 1947, the design of the MIT Whirlwind introduced the concept of a
control store as a way to simplify computer design and move beyond ad hoc
methods. The control store was a two-dimensional lattice: one dimension
accepted "control time pulses" from the CPU's internal clock, and the other
connected to control signals on gates and other circuits. A "pulse
distributor" would take the pulses generated by the CPU clock and break them
up into eight separate time pulses, each of which would activate a different
row of the lattice. When the row was activated, it would activate the
control signals connected to it"
IBM was the first to go whole hog on microcode.
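One way to picture the Whirlwind control store quoted above is as a table indexed by time pulse, where each row turns on a set of control lines. Here is a rough Python sketch; the eight-pulse structure follows the quote, but the signal names are invented for illustration.

```python
# Rough sketch of a Whirlwind-style control store: a lattice where each
# of eight time pulses activates one row of control signals.
# Signal names here are invented for illustration.

CONTROL_STORE = [
    {"fetch"},         # pulse 0: fetch instruction
    {"decode"},        # pulse 1: decode it
    {"read_a"},        # pulse 2: gate operand A onto the bus
    {"read_b"},        # pulse 3: gate operand B onto the bus
    {"alu_add"},       # pulse 4: fire the adder
    {"write_result"},  # pulse 5: latch the result
    {"update_pc"},     # pulse 6: advance the program counter
    set(),             # pulse 7: idle
]

def pulse_distributor(cycle):
    """Break the master clock into eight time pulses and return the
    control signals active on that pulse."""
    return CONTROL_STORE[cycle % 8]

# Each tick of the CPU clock lights up one row of the lattice.
for t in range(8):
    print(t, sorted(pulse_distributor(t)))
```

The row contents are fixed wiring in this picture, which is why the Whirlwind scheme is a control store rather than a stored microprogram in the later sense.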
Arny Krueger
December 13th 07, 04:05 PM
"Mr Soul" > wrote in message
>> DEC early PDP series and Data General.
> Right. The PDP-11 was the first computer that I
> programmed on as a software engineer outside of school.
> However, it did not have built- in floating point
> instructions either, there was an extra FPP.
In the PDP, I suspect that the FPP was microcoded. FPPs didn't have to be
microcoded, but they often were.
>> The usual floating point machine instruction was a
>> register-register, or register-storage instruction. The
>> standard programming was something like:
>> Load accumulator or floating point register
>> Modify accumulator or register with data from storage or
>> another register Store accumulator or register to
>> storage if required.
> That is correct.
>>> From my reading, floating point math in the 360 was done
>>> in software not hardware (but I could be wrong).
>> The 360s used microcode, which we now call firmware,
>> not software. 360s did not use software simulation as
>> the term is usually used, to do floating point
>> arithmetic. However, it is easy to confuse microcode or
>> firmware with software, because both are programs.
> I know what microcode is & I've programmed in assembler
> before.
> From Wikipedia: "Confusingly, some today hardware
> vendors, especially IBM, use microcode as a synonym of a
> firmware, whether it actually implements the
> microprogramming of a processor or not.[1] "
Right, firmware is often a superset of microprogramming.
AFAIK current SOTA microprocessors are usually heavily microcoded, less so
if they are RISC.
Scott Dorsey
December 13th 07, 04:23 PM
Arny Krueger > wrote:
>"Scott Dorsey" > wrote in message
>> Arny Krueger > wrote:
>>>
>>> The 360s used microcode, which we now call firmware,
>>> not software. 360s did not use software simulation as
>>> the term is usually used, to do floating point
>>> arithmetic. However, it is easy to confuse microcode or
>>> firmware with software, because both are programs.
>
>> Microcode is not software, or firmware. It's microcode.
>
>Firmware is a superset of microcode.
But what if the microcode isn't in ROM? What if the machine has a
writable control store?
>> IBM, though, was the first one to use microcode to
>> decouple the instruction set and the hardware
>> architecture. That's a a big deal. --scott
>
>This article says maybe a little different, and I recall some of what they
>present.
>
>http://en.wikipedia.org/wiki/Microcode
While Whirlwind used microcode, they used it just to enhance the
instruction set and make a machine easier to program. The /360 machines
used microcode to completely obscure the architecture and to allow one
instruction set to be used on a whole series of machines with totally
different architectures. That's a milestone.
Ironically, it was also IBM who figured out that large microcoded
instruction sets were giving them a serious performance penalty due to
decoding and scheduling issues, and built the 801 with a reduced
directly-decoded instruction set. Of course, like most IBM developments,
their competition took the idea and ran with it....
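The decoupling idea can be sketched in miniature: one instruction set, two very different implementations underneath, the way microcode let the low-end and high-end 360 models share one architecture. This is a hypothetical Python illustration; the instruction names and the two "machines" are invented for the example.

```python
# Sketch of instruction-set/implementation decoupling: both "machines"
# below accept the same program and must produce the same visible result,
# even though their internals differ. Names are invented for illustration.
from functools import reduce

PROGRAM = [("LOAD", 7), ("ADD", 5), ("ADD", 5)]

def cheap_machine(program):
    # Plodding serial interpreter -- like a low-end, heavily microcoded model.
    acc = 0
    for op, arg in program:
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
    return acc

def fast_machine(program):
    # Different internals (a fold over the instruction stream), same
    # architecturally visible result.
    return reduce(lambda acc, ins: ins[1] if ins[0] == "LOAD" else acc + ins[1],
                  program, 0)

print(cheap_machine(PROGRAM), fast_machine(PROGRAM))  # 17 17
```

Software written against the instruction set runs on either machine unchanged, which is the property the /360 line delivered across its models.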
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Peter A. Stoll[_2_]
December 13th 07, 09:27 PM
"Arny Krueger" > wrote in
:
>
> This article says maybe a little different, and I recall some of what
> they present.
>
> http://en.wikipedia.org/wiki/Microcode
>
> "In 1947, the design of the MIT Whirlwind introduced the concept of a
> control store as a way to simplify computer design and move beyond ad
> hoc methods.
Interesting that the Whirlwind precedent in this area gets more attention
these days.
I recall Wilkes as being the early reference when I encountered microcode
in the 60's and 70's. Whirlwind got mentioned for core memory, research
budget consumption, vacuum tube reliability, and some other things, but not
for contributions to structured ways of doing control logic. Certainly not
a breath of it in microcode discussions.
Later I read a discussion of the structured logic in Whirlwind someplace,
and still later, I was asked to review a chapter on microprogramming for
the first edition of Patterson and Hennessy. That book as drafted used
very extensive and detailed 8086 examples, which is probably why I was
asked to review it. They disappeared from the published text, possibly
because Intel declined release permission, but a reference to Whirlwind was
added, possibly triggered by my written comment that it was an antecedent
which invalidated a too-broad precedence claim in the draft text.
On the other hand, this precedence discussion is sort of like the various
arguments that Columbus did not "really" open the path for sea traffic from
Europe to the Americas. Well, he did, not because he was the first to
travel, but because word of his travel triggered directly the subsequent
explosion of activity.
I'd only alter slightly Scott's emphasis on 360 microcode as enabling
instruction set consistency. I think in the context of the times, the
really novel idea was that one would even want to have instruction set
consistency, especially across so broad a product line. That may seem an
obvious desideratum now, but then it was hardly on the horizon. Once you
decided you wanted to do that, microcode was an implementation method which
helped the dollar-dominant member of the family (the model 30) cope with
the instruction set complexity.
David A Stocks
December 19th 07, 12:43 PM
"Arny Krueger" > wrote in message
. ..
>
> In the mid 1980s, fast mainframe RAM was down to something like $10,000
> per megabyte packaged to plug in and run, and 16 megabyte computers were
> starting to be pretty common. 24 bit addressing wouldn't hack it, and
> there was a wrenching change while operating systems and application
> programs were first kluged and later re-written to exploit 32 bit
> addressing. The change-over took several years. 4 gigabyte address
> spaces - only possible in dreams and virtual memory, right? ;-)
>
> And here we are today, where middle school kids are building computers in
> their bedrooms with 4 GB of real RAM, and doing it with allowance money. 4
> GB of fast RAM will probably sell for less than $100 some time next year
> or early the year after.
>
It's much the same with disk storage. About 12 years ago someone around here
came up with a scheme to do an upgrade on what was effectively a file
server, to give me another 80GB of online storage for some systems I was
running at the time. The upgrade cost £80,000. In a previous career I worked
on a key-to-disk system that had been built in the mid-1970s for data entry
arising from the US population census. The machine had a disk drive the size
of a washing machine which took replaceable disk platters, capacity 30MB
when formatted. I suspect these platters cost £thousands each, and when they
were new each drive had cost around £250,000.
D A Stocks