  1. #1
    Just Joined!
    Join Date: Apr 2012
    Location: Laramie WY
    Posts: 6

    Question on page_sync


    I'm new to Linux Forums, so hopefully this goes in the correct place.

    I have a Fedora 13 64-bit machine on which I have written a program. I know that version of Fedora is outdated, but I don't have a say in the matter right now.

    I have written a program (Fortran 95; I may convert it to C/C++ soon) that performs a correlation study. It is not a very complicated code; it basically just does a bunch of math operations and reads from files. The problem is that this same code was working optimally 6 months ago: it would read 6000 files of approximately 400k each in about 10 minutes and write the output file. Now that I need to revisit the code, running the exact same case takes over 2 hours to complete, and what I have found is that it seems to be encountering page_sync, which severely slows the program down.

    Can somebody please explain what page_sync actually is? I have googled it and have not found an explanation I can understand. At first I thought maybe I had forgotten to close a file after reading, but that's not the case; then I thought maybe I had used all my RAM, but that's not the case either, so I am stumped as to why it has become so slow. I don't believe I have even done a Linux kernel update since the code was first written.

    I'd be happy to provide any information people want, but I cannot provide the files being read from.

    Another thing: the program tends to start very fast, processing approximately 500 files in the first 5 seconds or so; after that, the rate drops to about 3-5 files per second. I appreciate all the help!

  2. #2
    Linux Guru Rubberman
    Join Date: Apr 2009
    Location: I can be found either 40 miles west of Chicago, in Chicago, or in a galaxy far, far away.
    Posts: 11,691
    Normally Linux uses a "write-behind" disc output algorithm: your program writes to disc, and the kernel caches the data and only writes it out later, when there are cycles to do so, in order to optimize system performance. Depending upon how the output disc is mounted, it may be set to sync on each write, which greatly slows your performance by eliminating the write-behind capability. I have done some experimentation with this, and the difference between the sync and async settings on a disc mount is very great indeed; it can easily account for a 10x performance difference.
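
    If you want to see the effect for yourself, here is a minimal C sketch (the file names are made up) that writes the same data through the page cache and through O_SYNC, which forces every write to hit the disc much as a "sync" mount does. Timing the two loops separately should show the gap:

        /* Minimal sketch contrasting write-behind caching with
         * synchronous output. File names are placeholders. */
        #include <fcntl.h>
        #include <string.h>
        #include <unistd.h>

        #define N 1000

        int main(void)
        {
            char buf[4096];
            memset(buf, 'x', sizeof(buf));

            /* Buffered: write() returns once the data is in the page
             * cache; the kernel flushes it to disc later. */
            int fd = open("buffered.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
            for (int i = 0; i < N; i++)
                (void)write(fd, buf, sizeof(buf));
            close(fd);

            /* Synchronous: O_SYNC makes each write() block until the
             * data is physically on the disc, like a "sync" mount. */
            fd = open("synced.dat", O_WRONLY | O_CREAT | O_TRUNC | O_SYNC, 0644);
            for (int i = 0; i < N; i++)
                (void)write(fd, buf, sizeof(buf));
            close(fd);

            return 0;
        }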

    Ok. After re-reading your post, I notice you mention that it starts fast and then slows down. That indicates to me that your disc DOES use write-behind caching, but that you are hitting the size limit of the cache (what does the "free" command show when you hit the limit?). At that point the disc is writing a LOT of data, and this blocks output until cache becomes available for your application to write into. A common cause of this is an output file that is not contiguous. When you ran the application originally, the disc may have been relatively new, hence had a lot of contiguous space for streaming (not random) writes. Now it may have a lot of files taking up space, so the amount of contiguous space is limited and each write requires a lot of seeking, which again gives a SERIOUS slowdown in output performance. This is why database systems pre-allocate space, and why, when they need to grow their table spaces, they do so in pretty big chunks. This is a major part of tuning Oracle, for example, so that you continue to get good performance even as other stuff takes up space on the disc.
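
    If you end up rewriting the output path in C, you can borrow that pre-allocation trick directly. Here is a rough sketch using posix_fallocate (the file name and size are made up) to ask the filesystem for one big extent up front instead of growing the file a block at a time:

        /* Rough sketch of pre-allocating output space, the way
         * database table spaces do. Name and size are placeholders. */
        #include <fcntl.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("output.dat", O_WRONLY | O_CREAT, 0644);
            if (fd < 0)
                return 1;

            /* Reserve 1 GiB in one request so the filesystem can try
             * to hand back a single contiguous extent. */
            if (posix_fallocate(fd, 0, (off_t)1024 * 1024 * 1024) != 0) {
                close(fd);
                return 1;
            }

            close(fd);
            return 0;
        }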
    Sometimes, real fast is almost as good as real time.
    Just remember, Semper Gumbi - always be flexible!

  3. #3
    Just Joined!
    Join Date: Apr 2012
    Location: Laramie WY
    Posts: 6
    Rubberman,
    Thanks for the response, and sorry it has taken me so long to get back; I have too many projects going on. I have gone back into the source code, and while I haven't yet tried "free" while it's running, I have been wondering why it gets slow when all I am doing within that segment of code is reading from the disc. There is no writing occurring when the slowdown happens, and if all it is doing is reading, then there shouldn't be a caching problem, should there? When I say there is no writing, I mean none at all: I print one line to the screen each iteration as a progress tracker, but that's it, and it's not a complicated print either. It used to run fast, so I feel that writing to the disc is not the problem.

    I'm kind of sunk on this, I believe; I just cannot come up with a solution. I felt I was pretty conservative in writing the code, since I mostly do hefty computations, but Fortran does have its limits on reading files in chunks. Again, there is no writing to disc during the iteration, but there is one file read per iteration. So if you have any more ideas, they are appreciated!
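
    For reference, if I do convert to C, I imagine the read side would look roughly like this: read each file in one big chunk and hint to the kernel that access is sequential (the file name is a placeholder, and a real version would loop on short reads):

        /* Rough sketch: read one input file in a single large chunk
         * and hint that access is sequential for read-ahead. */
        #include <fcntl.h>
        #include <stdlib.h>
        #include <sys/stat.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("input_0001.dat", O_RDONLY);
            if (fd < 0)
                return 1;

            struct stat st;
            fstat(fd, &st);

            /* Tell the kernel we will read this file front to back. */
            posix_fadvise(fd, 0, st.st_size, POSIX_FADV_SEQUENTIAL);

            char *buf = malloc(st.st_size);
            ssize_t got = read(fd, buf, st.st_size);
            (void)got; /* loop until all bytes arrive in real code */

            free(buf);
            close(fd);
            return 0;
        }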

    I do, however, like the idea of keeping lots of contiguous space available. For my next set of drives I'll probably keep a dedicated write partition that can be reformatted periodically: I'll use it only for writing large output, then move the files elsewhere when appropriate.
