System Hard Drive RPMs
#1   adam79

Hey. I have a 2nd hard drive that runs at 7200 RPM for recording audio tracks
(I use PT LE). I'm updating my computer and had a quick question: does
the speed of the system hard drive make a difference in the performance
of Pro Tools? The stock system hard drive in the MacBook Pro I'm looking
at is 5400 RPM. Is it really worth the extra $225 to get the
system hard drive to run at 7200 RPM?

Thanks,
-Adam
#2   Richard Crowley

"adam79" wrote ...
Hey. I have a 2nd hard drive that runs at 7200 RPM for recording audio tracks
(I use PT LE). I'm updating my computer and had a quick question: does
the speed of the system hard drive make a difference in the performance
of Pro Tools? The stock system hard drive in the MacBook Pro I'm
looking at is 5400 RPM. Is it really worth the extra $225 to
get the system hard drive to run at 7200 RPM?


If you are storing audio to a separate drive, it would seem
unlikely that the RPM of the system drive would have any
significant effect on performance.

Furthermore, modern drives are so dense that the actual
throughput (which is all that matters) is much higher, even
for drives with lower RPMs.

#3   adam79

Soundhaspriority wrote:

I can tell you this: I have recorded 12 channels at 24/96 onto an external
USB drive at 5400 rpm. The more likely source of difficulty is use of the
system drive for recording. With Windows XP, this can cause problems. Any
comments from MacBook users?


I've recorded something like 8 tracks on the system drive. I'm currently using a
MacBook Pro with a 7200 RPM system drive. I had no problems. I was just
too lazy to bring out the 2nd hard drive. However, it was just
tracking; I didn't really bother mixing or adding plugins. It was more of
a reference recording so we wouldn't forget the songs we were writing.
#4   D C

adam79 wrote:

I've recorded something like 8 tracks on the system drive. I'm currently using a
MacBook Pro with a 7200 RPM system drive. I had no problems.



7200 is the rule of thumb I've always heard. I have seen your posts
about getting rid of the MacBook Pro and getting a Mini. I wouldn't. In
my limited playing with them at the Apple Store, they couldn't get out
of their own way.
#5   Mike Rivers

On Oct 21, 9:57 pm, D C wrote:

7200 is the rule of thumb I've always heard.


It depends on what you want to record. That's a pretty good rule of
thumb for 24 tracks at 24-bit 44.1 kHz sample rate, with the ability
to punch in a few tracks simultaneously. I was recording 16-bit 44.1
kHz stereo just fine on a 4200 RPM drive when one of those newfangled
5400 RPM drives that everyone said I needed cost about as much as the
whole computer.



#6   Arny Krueger


"Chel van Gennip" wrote in message
...
D C wrote:

7200 is the rule of thumb I've always heard.


RPM does not tell the story.


Agreed. It's an indicator, but not the whole story.

The only thing that matters is the transfer rate: MBytes/second.

That would be the effective transfer rate. One way to shoot transfer rate in
the foot is to have a second independent process that is putting seeks on
the drive. If your system has only 1 drive and is swapping heavily while
you are recording, then you are cruising for a bruising.

One channel at 24/48k will take about 0.15 MByte/second so a drive that
does 20MByte/second (and many 2.5" drives at 5400RPM or CF flash cards do
that or better) will support about 140 channels at that resolution.


Like you say, better. Transfer rates up in the 60 MB/second range are not
uncommon with commodity drives.

Another source of loss of DTR capacity is filling up the hard drive. Hard
drives are 3-5 times slower when operating on their inner tracks as opposed
to the outer tracks.

Rules of thumb from a time when HD recording density resulted in 30 MByte
drives won't work now that recording density has increased so that HDs can be
hundreds of gigabytes.


I still remember paying $600 for a 20 MB hard drive that was delivered
with a poorly-configured chip on the controller board that made it only a
little faster than a floppy. Now, 200 GB (x 10,000) is only marginally
commercial.
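
[The 24/48k arithmetic quoted above is easy to reproduce. A minimal sketch in Python; the 20 and 60 MB/s figures are simply the numbers mentioned in this thread, not measurements:

BYTES_PER_SAMPLE = 3        # 24-bit audio
SAMPLE_RATE = 48_000        # 48 kHz, samples per second per channel

bytes_per_channel = BYTES_PER_SAMPLE * SAMPLE_RATE   # ~144,000 B/s, i.e. ~0.14 MB/s

for drive_mb_per_s in (20, 60):                       # figures mentioned in the thread
    drive_bytes = drive_mb_per_s * 1_000_000
    channels = drive_bytes // bytes_per_channel
    print(f"{drive_mb_per_s} MB/s sustained -> roughly {channels} channels at 24/48k")

Seeks from other processes, a filling disk, and the slower inner tracks all eat into the sustained figure, which is the point made above.]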


#7   Laurence Payne

On Mon, 22 Oct 2007 08:48:00 -0400, "Arny Krueger"
wrote:

That would be the effective transfer rate. One way to shoot transfer rate in
the foot is to have a second independent process that is putting seeks on
the drive. If your system has only 1 drive and is swapping heavily while
you are recording, then you are cruising for a bruising.


But why on Earth WOULD it be "swapping heavily" (whatever that means?)
Sure, you can sabotage any recording by running another disk-intensive
process behind it. Which is why you don't :-)
#8   Scott Dorsey

Laurence Payne NOSPAMlpayne1ATdsl.pipex.com wrote:
On Mon, 22 Oct 2007 08:48:00 -0400, "Arny Krueger"
wrote:

That would be the effective transfer rate. One way to shoot transfer rate in
the foot is to have a second independent process that is putting seeks on
the drive. If your system has only 1 drive and is swapping heavily while
you are recording, then you are cruising for a bruising.


But why on Earth WOULD it be "swapping heavily" (whatever that means?)


Okay, virtual memory systems do two things: first of all, they page out
single pages of memory that aren't in use in order to page in memory that
you want to use. Secondly, when things get really bad, they swap entire
processes out.

http://www.netjeff.com/humor/item.cgi?file=TheThingKing

Sure, you can sabotage any recording by running another disk-intensive
process behind it. Which is why you don't :-)


The issue is that when you have memory-intensive processes and you have
insufficient memory, the virtual memory system goes to disk. And disk is
a lot slower than core, so you spend a lot of time thrashing. This is
a consequence of bloated applications that use far more memory than they
should (like PT), but it can easily be solved by throwing hardware at the
problem and purchasing more memory.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
#9   Sean Conolly

"Laurence Payne" NOSPAMlpayne1ATdsl.pipex.com wrote in message
...
On Mon, 22 Oct 2007 08:48:00 -0400, "Arny Krueger"
wrote:

That would be the effective transfer rate. One way to shoot transfer rate
in
the foot is to have a second independent process that is putting seeks on
the drive. If your system has only 1 drive and is swapping heavily while
you are recording, then you are cruising for a bruising.


But why on Earth WOULD it be "swapping heavily" (whatever that means?)
Sure, you can sabotage any recording by running another disk-intensive
process behind it. Which is why you don't :-)


Page swaps are one good source of background writes, and you can have a lot
of swap activity well before you run out of physical memory (at least on
Windows; I don't know about the Mac).

Sean


#10   Laurence Payne

On 22 Oct 2007 09:48:17 -0400, (Scott Dorsey) wrote:

That would be the effective transfer rate. One way to shoot transfer rate in
the foot is to have a second independent process that is putting seeks on
the drive. If your system has only 1 drive and is swapping heavily while
you are recording, then you are cruising for a bruising.


But why on Earth WOULD it be "swapping heavily" (whatever that means?)


Okay, virtual memory systems do two things: first of all, they page out
single pages of memory that aren't in use in order to page in memory that
you want to use. Secondly, when things get really bad, they swap entire
processes out.

http://www.netjeff.com/humor/item.cgi?file=TheThingKing

Sure, you can sabotage any recording by running another disk-intensive
process behind it. Which is why you don't :-)


The issue is that when you have memory-intensive processes and you have
insufficient memory, the virtual memory system goes to disk. And disk is
a lot slower than core, so you spend a lot of time thrashing. This is
a consequence of bloated applications that use far more memory than they
should (like PT), but it can easily be solved by throwing hardware at the
problem and purchasing more memory.


And, as I've had to say FAR too many times before when the old bogey
of swapping has been cited, it's having systems with sufficient RAM
that this WON'T happen that has enabled reliable use of a standard
computer as a DAW. Program code is now tiny compared with physical
RAM.


#11   Laurence Payne

On Mon, 22 Oct 2007 09:50:42 -0400, "Sean Conolly"
wrote:

But why on Earth WOULD it be "swapping heavily" (whatever that means?)
Sure, you can sabotage any recording by running another disk-intensive
process behind it. Which is why you don't :-)


Page swaps are one good source of background writes, and you can have a lot
of swap activity well before you run out of physical memory (at least on
Windows; I don't know about the Mac).


As (again) I've had to say too many times: on a PC with adequate RAM
(and there's no excuse these days for a DAW not to have this) go into
System and completely disable the on-disk paging file. Don't argue it
can't be done, or get tied up in the two usages of the term "virtual
memory". Just do it. Then reboot, and watch your computer perform
precisely as before. (A few programs do like to see a paging file on
disk, though I don't think they use it for what you think they do. So,
having proved the point, reset it.)
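
[If you try Laurence's experiment, a quick way to confirm the paging file really is gone after the reboot; this again assumes the third-party psutil package, which is my addition, not his:

import psutil   # third-party: pip install psutil

ram = psutil.virtual_memory()
swap = psutil.swap_memory()

print(f"physical RAM: {ram.total / 2**30:.1f} GB "
      f"({ram.available / 2**30:.1f} GB currently available)")
print(f"swap/pagefile: {swap.total / 2**20:.0f} MB configured, "
      f"{swap.used / 2**20:.0f} MB in use")

if swap.total == 0:
    print("No paging file is configured on this system.")
]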
#12   Scott Dorsey

Laurence Payne NOSPAMlpayne1ATdsl.pipex.com wrote:

And, as I've had to say FAR too many times before when the old bogey
of swapping has been cited, it's having systems with sufficient RAM
that this WON'T happen that has enabled reliable use of a standard
computer as a DAW. Program code is now tiny compared with physical
RAM.


Sure, but data sets are often huge when compared with physical ram. It's
easy to make a PT system thrash if you throw enough tracks onto it.

No matter how fast and large computers get, users always figure out ways to
make them slow.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
#14   Arny Krueger


"Laurence Payne" NOSPAMlpayne1ATdsl.pipex.com wrote in message
...

On Mon, 22 Oct 2007 08:48:00 -0400, "Arny Krueger"
wrote:


That would be the effective transfer rate. One way to shoot transfer rate
in
the foot is to have a second independent process that is putting seeks on
the drive. If your system has only 1 drive and is swapping heavily while
you are recording, then you are cruising for a bruising.


But why on Earth WOULD it be "swapping heavily" (whatever that means?)


Lack of RAM.

Sure, you can sabotage any recording by running another disk-intensive
process behind it.


Or memory intensive when RAM is insufficient.

Which is why you don't :-)


Agreed.

Some of my comments are based on the fact that the last two machines I
worked on were a 64 MB Win Me machine and a 256 MB XP machine. Both would
sit at an empty desktop, no programs running, and just swap a little.


#15   Scott Dorsey

Laurence Payne NOSPAMlpayne1ATdsl.pipex.com wrote:
On 22 Oct 2007 11:06:02 -0400, (Scott Dorsey) wrote:

Sure, but data sets are often huge when compared with physical ram. It's
easy to make a PT system thrash if you throw enough tracks onto it.

No matter how fast and large computers get, users always figure out ways to
make them slow.


Does PT have particularly bad design in this area? Every multitrack
program I've worked with is designed to stream data to/from disk as
required. Given ample RAM, the program and/or os can be pretty clever
about caching, cutting down disk activity if you're continually
rolling over a particular section of music. But I don't see how
"thrashing" comes into it?


Ideally what you want to do is cache as much of the data set in memory
as possible. That means you aren't living by the whims of the
(nondeterministic) disk access time. But if you cache _too much_, the
core gets swapped out to disk and then you're back where you started.

Remember, the paging and swapping is done _by the operating system_ without
the application having any control over it. This isn't a realtime system,
this is Windows. So the application has no idea if it's going to make
deadline or not and there's no way it can ask the OS to find out. Consequently
we just throw hardware at the problem and everything is fine for a while.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."


#16   Laurence Payne

On Mon, 22 Oct 2007 11:50:02 -0400, "Arny Krueger"
wrote:

But why on Earth WOULD it be "swapping heavily" (whatever that means?)


Lack of RAM.

Sure, you can sabotage any recording by running another disk-intensive
process behind it.


Or memory intensive when RAM is insufficient.

Which is why you don't :-)


Agreed.

Some of my comments are based on the fact that the last two machines I
worked on were a 64 MB Win Me machine and a 256 MB XP machine. Both would
sit at an empty desktop, no programs running, and just swap a little.


Well, don't forget to preface future opinions with "BTW, I'm talking
about what happens on obsolete gear - not how it is now on an adequate
system!"
#17   Laurence Payne

On 22 Oct 2007 11:53:23 -0400, (Scott Dorsey) wrote:

Does PT have particularly bad design in this area? Every multitrack
program I've worked with is designed to stream data to/from disk as
required. Given ample RAM, the program and/or os can be pretty clever
about caching, cutting down disk activity if you're continually
rolling over a particular section of music. But I don't see how
"thrashing" comes into it?


Ideally what you want to do is cache as much of the data set in memory
as possible. That means you aren't living by the whims of the
(nondeterministic) disk access time. But if you cache _too much_, the
core gets swapped out to disk and then you're back where you started.

Remember, the paging and swapping is done _by the operating system_ without
the application having any control over it. This isn't a realtime system,
this is Windows. So the application has no idea if it's going to make
deadline or not and there's no way it can ask the OS to find out. Consequently
we just throw hardware at the problem and everything is fine for a while.
--scott


Sure. As much AS POSSIBLE. Are you suggesting PT or Windows affords
data caching the same priority as it gives program code? That
whenever a program accesses a data file larger than available RAM,
core will be swapped to disk in the interests of caching every last
possible byte of data? I think not :-)

Are you maybe falling into the common trap of thinking Virtual Memory
= paging file?
#18   Scott Dorsey

Laurence Payne NOSPAMlpayne1ATdsl.pipex.com wrote:
Well, don't forget to preface future opinions with "BTW, I'm talking
about what happens on obsolete gear - not how it is now on an adequate
system!"


That's the thing. No matter WHAT you buy, sooner or later (and in the
computer world it's invariably sooner) it will be obsolete.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
#19   Scott Dorsey

Laurence Payne NOSPAMlpayne1ATdsl.pipex.com wrote:

Sure. As much AS POSSIBLE. Are you suggesting PT or Windows affords
data caching the same priority as it gives program code?


It's not data caching. It's virtual memory. It's done by Windows, and
the application has no control over it.

That
whenever a program accesses a data file larger than available RAM,
core will be swapped to disk in the interests of caching every last
possible byte of data? I think not :-)


No, you have it backwards. The program has a large address space that is
available for it to use. It is larger than the physical core space on
the computer. When a program goes to access memory that is not currently
swapped in, it swaps a page out to the page file, and swaps in one that
contains the memory block the application wants. In this way, the program
sees a very large address space without the computer actually needing to
have so much space in core.

Are you maybe falling into the common trap of thinking Virtual Memory
= paging file?


The paging file PLUS the physical memory IS the virtual memory space
available. This is early-1970s technology we are talking about here.

Many systems that are designed for realtime applications have very different
memory management, because on realtime operating systems the application
tells the operating system how much time it's willing to spend on each task
and the OS schedules things appropriately so all processes can meet deadline.
That's mid-1980s technology, and you will see it applied in realtime systems
today like pSOS and BeOS.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
#20   Arny Krueger


"Scott Dorsey" wrote in message
...
Laurence Payne NOSPAMlpayne1ATdsl.pipex.com wrote:

Sure. As much AS POSSIBLE. Are you suggesting PT or Windows affords
data caching the same priority as it gives program code?


It's not data caching. It's virtual memory. It's done by Windows, and
the application has no control over it.

That
whenever a program accesses a data file larger than available RAM,
core will be swapped to disk in the interests of caching every last
possible byte of data? I think not :-)


No, you have it backwards. The program has a large address space that is
available for it to use. It is larger than the physical core space on
the computer. When a program goes to access memory that is not currently
swapped in, it swaps a page out to the page file, and swaps in one that
contains the memory block the application wants. In this way, the program
sees a very large address space without the computer actually needing to
have so much space in core.

Are you maybe falling into the common trap of thinking Virtual Memory
= paging file?


The paging file PLUS the physical memory IS the virtual memory space
available. This is early-1970s technology we are talking about here.


The virtual memory space available can easily exceed the sum of physical
memory and the paging file. Just because there is address space that is in
some sense available doesn't mean that it has to be backed up with physical
memory.




#22   Mike Rivers

On Oct 22, 2:58 pm, (Scott Dorsey) wrote:

That's the thing. No matter WHAT you buy, sooner or later (and in the
computer world it's invariably sooner) it will be obsolete.


Almost always sooner than later.

#24   Mogens V.

Arny Krueger wrote:
"Chel van Gennip" wrote in message
...

D C wrote:


7200 is the rule of thumb I've always heard.



RPM does not tell the story.



Agreed. It's an indicator, but not the whole story.

The only thing that matters is the transfer rate: MBytes/second.

That would be the effective transfer rate. One way to shoot transfer rate in
the foot is to have a second independent process that is putting seeks on
the drive. If your system has only 1 drive and is swapping heavily while
you are recording, then you are cruising for a bruising.


One channel at 24/48k will take about 0.15 MByte/second so a drive that
does 20MByte/second (and many 2.5" drives at 5400RPM or CF flash cards do
that or better) will support about 140 channels at that resolution.



Like you say, better. Transfer rates up in the 60 MB/second range are not
uncommon with commodity drives.


For one contiguous file, yes. Writing/reading to more locations in
smaller chunks can easily lower the effective transfer rate.
Anyone have refs to how much queuing mechanisms (today mostly SATA NCQ)
matter for a DAW? Preferably for separate system and data drives.

--
Kind regards,
Mogens V.
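
[Mogens's point about scattered writes is easy to demonstrate with a crude timing sketch. The sizes are arbitrary, and OS write caching will blur the numbers, which is why the file is fsync'd before the clock stops:

import os
import random
import time

SIZE = 64 * 2**20        # 64 MB written in total
CHUNK = 64 * 2**10       # 64 KB per write
data = os.urandom(CHUNK)
offsets = [i * CHUNK for i in range(SIZE // CHUNK)]

def mb_per_s(path, order):
    """Write the chunks at the given offsets and return MB/s."""
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.truncate(SIZE)
        for off in order:
            f.seek(off)
            f.write(data)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(path)
    return SIZE / 2**20 / elapsed

scattered = offsets[:]
random.shuffle(scattered)

print(f"sequential: {mb_per_s('seq_test.tmp', offsets):6.1f} MB/s")
print(f"scattered : {mb_per_s('rnd_test.tmp', scattered):6.1f} MB/s")

On a spinning drive the scattered figure will typically come out well below the sequential one.]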

#25   Mogens V.

Scott Dorsey wrote:
Laurence Payne NOSPAMlpayne1ATdsl.pipex.com wrote:

And, as I've had to say FAR too many times before when the old bogey
of swapping has been cited, it's having systems with sufficient RAM
that this WON'T happen that has enabled reliable use of a standard
computer as a DAW. Program code is now tiny compared with physical
RAM.



Sure, but data sets are often huge when compared with physical ram. It's
easy to make a PT system thrash if you throw enough tracks onto it.

No matter how fast and large computers get, users always figure out ways to
make them slow.
--scott



"as appealing as it might seem,
it is impossible to patch or upgrade users"
-- Security Warrior


--
Kind regards,
Mogens V.



#26   Mogens V.

Laurence Payne wrote:
On 22 Oct 2007 11:53:23 -0400, (Scott Dorsey) wrote:


Does PT have particularly bad design in this area? Every multitrack
program I've worked with is designed to stream data to/from disk as
required. Given ample RAM, the program and/or os can be pretty clever
about caching, cutting down disk activity if you're continually
rolling over a particular section of music. But I don't see how
"thrashing" comes into it?


Ideally what you want to do is cache as much of the data set in memory
as possible. That means you aren't living by the whims of the
(nondeterministic) disk access time. But if you cache _too much_, the
core gets swapped out to disk and then you're back where you started.

Remember, the paging and swapping is done _by the operating system_ without
the application having any control over it. This isn't a realtime system,
this is Windows. So the application has no idea if it's going to make
deadline or not and there's no way it can ask the OS to find out. Consequently
we just throw hardware at the problem and everything is fine for a while.
--scott



Sure. As much AS POSSIBLE. Are you suggesting PT or Windows affords
data caching the same priority as it gives program code? That
whenever a program accesses a data file larger than available RAM,
core will be swapped to disk in the interests of caching every last
possible byte of data? I think not :-)


Oh, Windows will happily swap out parts of the OS. Other OSes do that
too, though some are better than others at handling swapping.
Dunno how much better (if at all) Vista may be designed [shrug].

Are you maybe falling into the common trap of thinking Virtual Memory
= paging file?



--
Kind regards,
Mogens V.

#27   Scott Dorsey

Laurence Payne NOSPAMlpayne1ATdsl.pipex.com wrote:
On 22 Oct 2007 15:04:51 -0400, (Scott Dorsey) wrote:
Are you maybe falling into the common trap of thinking Virtual Memory
= paging file?


The paging file PLUS the physical memory IS the virtual memory space
available. This is early-1970s technology we are talking about here.


Nope. The virtual memory space is always 4GB (or whatever it is on
your os). No "available" or not about it. How much of it is mapped
onto physical resources is another matter.


That's the "address space" of the machine. But plenty of it is not
mapped to a physical resource, and if you attempt to use so much of
it that physical resources are exhausted, you will get a
"PROCESS ABEND-- OUT OF MEMORY" or similar error message. That is,
the available virtual memory space is probably smaller than the full
address space of the computer (although these days it may not be).

You're falsely extrapolating how a system with inadequate physical
memory gets by into how a modern system will behave. Windows isn't
perfect, but I think it's a lot cleverer than you give it credit for
:-)


No, I'm explaining how virtual memory works. And the PROBLEM is that
Windows is very clever. If you are running realtime applications,
you don't WANT a lot of that cleverness going on.

No matter HOW much physical resource is available, developers and
users will find they want to do something that requires more.
--scott

--
"C'est un Nagra. C'est suisse, et tres, tres precis."
#28   Peter Larsen

Scott Dorsey wrote:

Nope. The virtual memory space is always 4GB (or whatever it is on
your os). No "available" or not about it. How much of it is mapped
onto physical resources is another matter.



That's not how it is.

That's the "address space" of the machine.


Let's limit this to 32-bit Windows. Each process gets a 4 GB address space;
unless a special switch is enabled when booting, two of those GB are for the OS
and the other two for the application.

But plenty of it is not mapped to a physical resource,


You gotta see it the other way around: physical memory - be it disk or RAM;
grossly oversimplified and generalized, there is no way for the
application to know which it is - is mapped into the address
space of the process when it gets its timeslice. The OS will generally
prefer physical RAM, but there is no guarantee; consequently it is possible
for a program that allocates too much of its own cache to end up getting
that cache paged to disk.

and if you attempt to use so much of
it that physical resources are exhausted, you will get a
"PROCESS ABEND-- OUT OF MEMORY" or similar error message.


What is technically referred to as A problem can indeed occur.

You're falsely extrapolating how a system with inadequate physical
memory gets by into how a modern system will behave. Windows isn't
perfect, but I think it's a lot cleverer than you give it credit for
:-)


No, I'm explaining how virtual memory works. And the PROBLEM is that
Windows is very clever. If you are running realtime applications,
you don't WANT a lot of that cleverness going on.


Yes. Fix the pagefile. If possible on your Windows version, fix the cache
size, and check the gadgets on the Sysinternals pages, now at microsoft.com. Get
the pagefile defrag tool while you are there; it is a must-have, but may have been
integrated into Bloatware V6, i.e. Vista, an OS that is designed to actually
use the power of a modern machine, with 5387254.78 new ways of knowing
better than the owner/operator. It seems to be very much designed with
running Office 2007 in mind and not so much for people who need to do
something to humongous amounts of audio and video data. Reckon I'll have to
read a book or two to come to grips with it.

No matter HOW much physical resource is available, developers and
users will find they want to do something that requires more.


Yes. But investigate border conditions prior to encountering them in a
production situation. Filling the OS disk to the brink with temp files is
not advisable; always have space free. I killed an NT4 server - terminal
overwriting of system files - in MCSE training: 4 zipping processes running
concurrently and making large tempfiles on the OS partition did it, and the OS
did not react to the disk-full situation in time.

In my experience the OS should have a pagefile, because it will become too
timid with RAM allocation if it hasn't got one it can dump a few .dlls
to, but it may not be necessary to make it as large as Windows suggests.

--scott



Kind regards

Peter Larsen



#29   Mogens V.

Laurence Payne wrote:
On Mon, 22 Oct 2007 09:50:42 -0400, "Sean Conolly"
wrote:


But why on Earth WOULD it be "swapping heavily" (whatever that means?)
Sure, you can sabotage any recording by running another disk-intensive
process behind it. Which is why you don't :-)


Page swaps are one good source of background writes, and you can have a lot
of swap activity well before you run out of physical memory (at least on
Windows; I don't know about the Mac).



As (again) I've had to say too many times: on a PC with adequate RAM
(and there's no excuse these days for a DAW not to have this)


Sure, RAM is cheap, but not all (non-new) mobos are designed to easily
take more than 4 GB of RAM. Of course, just swap the mobo and CPU and RAM..

go into System and completely disable the on-disk paging file. Don't argue it
can't be done, or get tied up in the two usages of the term "virtual
memory". Just do it. Then reboot, and watch your computer perform
precisely as before.


Until resources are exhausted. Please don't repeat the 'no excuse for
RAM' line, because someone may actually stack enough synths or whatever
to exhaust a 4 GB system. Yes, I deliberately skipped systems with 8-20+ GB of
RAM, to illustrate that your advice is ill adapted for non-experts.

You're absolutely correct that it's fully possible to run without a
pagefile, but this should only be done fully knowing every operating
situation with every intended use, which you don't mention here.

A more useful approach is setting the pagefile to a large _fixed_ size,
so Windblows doesn't spend energy re-sizing it as it thinks is needed,
then use some utility to ensure it's one contiguous file and move it to
the front of the disk.

(A few programs do like to see a paging file on disk, though I don't think


IOW, you don't know.. Which programs, BTW? Do userspace programs
directly use the pagefile? Or do they acquire resources from the _OS_,
which then uses the pagefile as needed (or as dreamt up by programmers)?

Programs don't ask for disk space to live on; all they can do is use some
version of malloc() to get more memory, which to them is just.. memory..
It's the OS that arbitrates between RAM and disk.

they use it for what you think they do. So,
having proved the point, reset it.)



--
Kind regards,
Mogens V.

#30   Laurence Payne

On Tue, 23 Oct 2007 11:42:40 +0200, "Mogens V."
wrote:

(A few programs do like to see a paging file on disk, though I don't think


IOW, you don't know.. Which programs, BTW? Do userspace programs
directly use the pagefile? Or do they aquire ressources from the _OS_,
which then use the pagefile as needed (or dreamt up by programmers) ?


The only program I've personally known to object to the absence of a
paging file is Photoshop. Apart from that, I'm reluctant to pass on
hearsay, as people are often so muddled (and, for some reason,
passionate :-) on this topic.


#31   Mogens V.

D C wrote:
adam79 wrote:

I've recorded something like 8 tracks on the system drive. I'm currently using a
MacBook Pro with a 7200 RPM system drive. I had no problems.



7200 is the rule of thumb I've always heard. I have seen your posts
about getting rid of the MacBook Pro and getting a Mini. I wouldn't. In
my limited playing with them at the Apple Store, they couldn't get out
of their own way.


While I have neither a MacBook nor a Mini, I haven't been impressed with
the Minis I've worked on briefly. Fine for smaller home use, though.
Next year I expect to replace my G4 dual 800 with a MacBook, and will have
a 7200 RPM system drive with an external FireWire-based 3½" drive for
music projects/recording. All I'll care about on that drive is
stability, noise and heat.

There was a discussion about drives/FireWire in here about half a year
ago, where (AFAIR) most agreed that for even a fairly large number of tracks,
at least 32, performance didn't matter much on a dedicated drive.

--
Kind regards,
Mogens V.

#32   adam79

D C wrote:
adam79 wrote:

I've recorded something like 8 tracks on the system drive. I'm currently using a
MacBook Pro with a 7200 RPM system drive. I had no problems.



I was thinking of getting a Mini, but after checking out the specs I
realized it didn't have a PCI-e slot or an ExpressCard/34 slot. I'm pretty
committed to buying a UAD-1, so I need one of those slots to
connect it. I love the portability of the MacBook Pro, but I
might go for the upgrade to the Mac Pro.. it'll give me more CPU power,
more memory upgrade space, and those PCI-e slots.
#33   Richard Crowley

"Scott Dorsey" wrote ...
No matter HOW much physical resource is available,
developers and users will find they want to do some-
thing that requires more.


It takes only ~2.5 hours to get from Portland to Seattle if
you "overclock". But there is a constant tension between
us here in Portland trying to make faster and faster CPUs
and those guys up there in Seattle trying to invent more
and more ways to soak up the horsepower doing things
that aren't of a lot of practical use to the user.


#34   D C

adam79 wrote:

I was thinking of getting a Mini, but after checking out the specs I
realized it didn't have a PCI-e slot or an ExpressCard/34 slot. I'm pretty
committed to buying a UAD-1, so I need one of those slots to
connect it. I love the portability of the MacBook Pro, but I
might go for the upgrade to the Mac Pro.. it'll give me more CPU power,
more memory upgrade space, and those PCI-e slots.



Why do you want the slots, when we have Firewire now?
#35   adam79

D C wrote:
adam79 wrote:

I was thinking of getting a Mini, but after checking out the specs I
realized it didn't have a PCI-e slot or an ExpressCard/34 slot. I'm pretty
committed to buying a UAD-1, so I need one of those slots to
connect it. I love the portability of the MacBook Pro, but I
might go for the upgrade to the Mac Pro.. it'll give me more CPU
power, more memory upgrade space, and those PCI-e slots.



Why do you want the slots, when we have Firewire now?


The UAD-1 needs a faster connection; FireWire is too slow.


#36   D C

adam79 wrote:

Why do you want the slots, when we have Firewire now?


The UAD-1 needs a faster connection; FireWire is too slow.



Have you recorded anything yet?