#1 - Scott Dorsey (rec.audio.pro) - System Hard Drive RPMs

Laurence Payne NOSPAMlpayne1ATdsl.pipex.com wrote:

Sure. As much AS POSSIBLE. Are you suggesting PT or Windows affords
data caching the same priority as it gives program code?


It's not data caching. It's virtual memory. It's done by Windows, and
the application has no control over it.

That
whenever a program accesses a data file larger than available RAM,
core will be swapped to disk in the interests of caching every last
possible byte of data? I think not :-)


No, you have it backwards. The program has a large address space that is
available for it to use. It is larger than the physical core space on
the computer. When the program goes to access memory that is not currently
swapped in, the OS swaps a page out to the page file and swaps in the one
that contains the memory block the application wants. In this way, the program
sees a very large address space without the computer actually needing to
have so much space in core.
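
As a rough sketch of that reserve/commit/page-fault sequence, here is a
minimal C example against the Win32 API; the sizes are made up, error
handling is trimmed, and it is not code from PT or any other real
application:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T reserve = (SIZE_T)1 << 30;    /* reserve 1 GB of address space    */
    SIZE_T commit  = 64u * 1024 * 1024;  /* but back only 64 MB of it        */
    SIZE_T i;

    /* MEM_RESERVE claims addresses only; it costs no RAM and no page file. */
    char *base = VirtualAlloc(NULL, reserve, MEM_RESERVE, PAGE_NOACCESS);
    if (base == NULL) { printf("reserve failed\n"); return 1; }

    /* MEM_COMMIT asks the OS to back this sub-range with RAM or page file. */
    char *buf = VirtualAlloc(base, commit, MEM_COMMIT, PAGE_READWRITE);
    if (buf == NULL) { printf("commit failed\n"); return 1; }

    /* First touch of each page faults; the OS finds a physical page for it, */
    /* evicting some other page to the page file if it has to.               */
    for (i = 0; i < commit; i += 4096)
        buf[i] = 1;

    VirtualFree(base, 0, MEM_RELEASE);
    return 0;
}

Reserving costs nothing but addresses; only committed and touched pages have
to exist somewhere physical, which is why the address space can look so much
bigger than the machine.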

Are you maybe falling into the common trap of thinking Virtual Memory
= paging file?


The paging file PLUS the physical memory IS the virtual memory space
available. This is early-1970s technology we are talking about here.

Many systems designed for realtime applications have very different
memory management, because on realtime operating systems the application
tells the operating system how much time it's willing to spend on each task
and the OS schedules things appropriately so all processes can meet their
deadlines. That's mid-1980s technology, and you will see it applied in
realtime systems today like pSOS and BeOS.
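
Desktop Windows, by contrast, has nothing like a deadline scheduler; the
closest an application gets is asking for higher priority and a finer timer
tick, which is a hint rather than a guarantee. A rough Win32 C sketch of
those knobs (illustrative only, not anyone's actual DAW code):

#include <windows.h>
#include <mmsystem.h>   /* timeBeginPeriod; link with winmm.lib */

int main(void)
{
    /* On a realtime OS a task declares its timing needs and the scheduler */
    /* guarantees them.  On desktop Windows you can only ask nicely:       */
    timeBeginPeriod(1);                            /* 1 ms timer resolution */
    SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

    /* ... audio processing loop would run here ... */

    timeEndPeriod(1);
    return 0;
}
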
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
#2 - Arny Krueger (rec.audio.pro) - System Hard Drive RPMs


"Scott Dorsey" wrote in message
...
Laurence Payne NOSPAMlpayne1ATdsl.pipex.com wrote:

Sure. As much AS POSSIBLE. Are you suggesting PT or Windows affords
data caching the same priority as it gives program code?


It's not data caching. It's virtual memory. It's done by Windows, and
the application has no control over it.

That
whenever a program accesses a data file larger than available RAM,
core will be swapped to disk in the interests of caching every last
possible byte of data? I think not :-)


No, you have it backwards. The program has a large address space that is
available for it to use. It is larger than the physical core space on
the computer. When the program goes to access memory that is not currently
swapped in, the OS swaps a page out to the page file and swaps in the one
that contains the memory block the application wants. In this way, the program
sees a very large address space without the computer actually needing to
have so much space in core.

Are you maybe falling into the common trap of thinking Virtual Memory
= paging file?


The paging file PLUS the physical memory IS the virtual memory space
available. This is early-1970s technology we are talking about here.


The virtual memory space available can easily exceed the sum of physical
memory and the paging file. Just because there is address space that is in
some sense available doesn't mean that it has to be backed by physical
memory.
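
A quick way to see both numbers on a given machine is to ask Windows for
them. A rough C sketch using the documented GlobalMemoryStatusEx call;
ullTotalPageFile is the commit limit (RAM plus page file), while
ullTotalVirtual is the user-mode address space of the process:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof ms;
    GlobalMemoryStatusEx(&ms);

    /* Commit limit: everything that can actually be backed right now. */
    printf("RAM + page file : %llu MB\n",
           (unsigned long long)(ms.ullTotalPageFile / (1024 * 1024)));
    /* Address space: what the process can address, backed or not.     */
    printf("address space   : %llu MB\n",
           (unsigned long long)(ms.ullTotalVirtual / (1024 * 1024)));
    return 0;
}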


#5 - Scott Dorsey (rec.audio.pro) - System Hard Drive RPMs

Laurence Payne NOSPAMlpayne1ATdsl.pipex.com wrote:
On 22 Oct 2007 15:04:51 -0400, (Scott Dorsey) wrote:
Are you maybe falling into the common trap of thinking Virtual Memory
= paging file?


The paging file PLUS the physical memory IS the virtual memory space
available. This is early-1970s technology we are talking about here.


Nope. The virtual memory space is always 4GB (or whatever it is on
your os). No "available" or not about it. How much of it is mapped
onto physical resources is another matter.


That's the "address space" of the machine. But plenty of it is not
mapped to a physical resource, and if you attempt to use so much of
it that physical resources are exhausted, you will get a
"PROCESS ABEND-- OUT OF MEMORY" or similar error message. That is,
the available virtual memory space is probably smaller than the full
address space of the computer (although these days it may not be).
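
A crude way to demonstrate that limit, if you don't mind making the machine
miserable for a minute: keep committing memory until the OS refuses. On a
32-bit box with a modest page file this fails well short of the 4 GB address
space. A rough C sketch (the allocations are deliberately never freed; the
OS reclaims them at exit):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T chunk = 16u * 1024 * 1024;   /* 16 MB at a time */
    SIZE_T total = 0;

    /* Committed pages must be backed by RAM or the page file, so this      */
    /* loop stops at the commit limit, not at the end of the address space. */
    while (VirtualAlloc(NULL, chunk, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE) != NULL)
        total += chunk;

    printf("committed %lu MB before the OS said no\n",
           (unsigned long)(total / (1024 * 1024)));
    return 0;
}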

You're falsely extrapolating how a system with inadequate physical
memory gets by into how a modern system will behave. Windows isn't
perfect, but I think it's a lot cleverer than you give it credit for
:-)


No, I'm explaining how virtual memory works. And the PROBLEM is that
Windows is very clever. If you are running realtime applications,
you don't WANT a lot of that cleverness going on.
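
One of the few ways an application can opt out of some of that cleverness is
to pin its time-critical buffers so the pager leaves them alone. A hedged
Win32 C sketch; the buffer size is invented and the working-set numbers
would need tuning on a real system:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T bufsize = 8u * 1024 * 1024;   /* hypothetical audio buffer */
    void *buf = VirtualAlloc(NULL, bufsize, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (buf == NULL) return 1;

    /* The default working-set quota is small, so grow it before locking. */
    SetProcessWorkingSetSize(GetCurrentProcess(),
                             bufsize + 4u * 1024 * 1024,
                             bufsize + 8u * 1024 * 1024);

    /* VirtualLock keeps these pages resident while the process runs.     */
    if (!VirtualLock(buf, bufsize))
        printf("lock failed, error %lu\n", (unsigned long)GetLastError());

    /* ... fill and play the buffer ... */

    VirtualUnlock(buf, bufsize);
    VirtualFree(buf, 0, MEM_RELEASE);
    return 0;
}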

No matter HOW much physical resource is available, developers and
users will find they want to do something that requires more.
--scott

--
"C'est un Nagra. C'est suisse, et tres, tres precis."


#6 - Peter Larsen (rec.audio.pro) - System Hard Drive RPMs

Scott Dorsey wrote:

Nope. The virtual memory space is always 4GB (or whatever it is on
your os). No "available" or not about it. How much of it is mapped
onto physical resources is another matter.



That's not how it is.

That's the "address space" of the machine.


Let's limit this to 32-bit Windows. Each process gets a 4 GB address space;
unless a special switch is enabled when booting, two of those gigabytes are
for the OS and the other two for the application.
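
The boot switch being referred to is presumably the /3GB option in boot.ini
on 32-bit NT-family Windows, which moves the split to 3 GB for the
application and 1 GB for the kernel; the application also has to be linked
large-address-aware to use the extra gigabyte. An illustrative boot.ini
line, with a device path and description that will differ per installation:

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /3GB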

But plenty of it is not mapped to a physical resource,


You have to see it the other way around: physical memory - be it disk or RAM,
there is, grossly oversimplified and generalized, no way for the application
to know which it is - is mapped into the address space of the process when it
gets its timeslice. The OS will generally prefer physical RAM, but there is no
guarantee; consequently it is possible for a program that allocates too much
of its own cache to end up getting that cache paged to disk.

and if you attempt to use so much of
it that physical resources are exhausted, you will get a
"PROCESS ABEND-- OUT OF MEMORY" or similar error message.


What is technically referred to as A problem can indeed occur.

You're falsely extrapolating how a system with inadequate physical
memory gets by into how a modern system will behave. Windows isn't
perfect, but I think it's a lot cleverer than you give it credit for
:-)


No, I'm explaining how virtual memory works. And the PROBLEM is that
Windows is very clever. If you are running realtime applications,
you don't WANT a lot of that cleverness going on.


Yes. Fix the pagefile. If possible on your Windows version, fix the cache
size; check the gadgets on the Sysinternals pages, now at microsoft.com. Get
the pagefile defragmenter while you are there, it is a must-have, but it may
have been integrated into Bloatware V6, i.e. Vista, an OS that is designed
to actually use the power of a modern machine and with 5387254.78 new ways
of knowing better than the owner/operator. It seems to be very much designed
with running Office 2007 in mind and not very much with people who need to do
something to humongous amounts of audio and video data. Reckon I'll have to
read a book or two to come to grips with it.

No matter HOW much physical resource is available, developers and
users will find they want to do something that requires more.


Yes. But investigate border conditions before encountering them in a
production situation. Filling the OS disk to the brink with temp files is
not advisable; always keep some space free. I killed an NT4 server in MCSE
training - terminal overwriting of system files - when four zipping processes
running concurrently wrote large tempfiles on the OS partition; the OS did
not react to the disk-full situation in time.

In my experience the OS should have a pagefile, because it becomes too timid
with RAM allocation if it hasn't got one it can dump a few .dlls to, but it
may not be necessary to make it as large as Windows suggests.
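
For what it's worth, a fixed-size pagefile can be set from the Virtual Memory
dialog under System Properties, or by editing the registry value that stores
the setting. An illustrative value for a fixed 2048 MB pagefile; the path and
sizes are examples only:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management
  "PagingFiles" (REG_MULTI_SZ) = "C:\pagefile.sys 2048 2048"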




Kind regards

Peter Larsen



#7 - Richard Crowley (rec.audio.pro) - System Hard Drive RPMs

"Scott Dorsey" wrote ...
No matter HOW much physical resource is available, developers and
users will find they want to do something that requires more.


It takes only ~2.5 hours to get from Portland to Seattle if
you "overclock". But there is a constant tension between
us here in Portland trying to make faster and faster CPUs
and those guys up there in Seattle trying to invent more
and more ways to soak up the horsepower doing things
that aren't of a lot of practical use to the user.

