  1. #1
    Just Joined!
    Join Date
    Nov 2009
    Posts
    5

    Trying to track down MySQL tables responsible for disk access spikes


    Hi All,

    I'm currently working on speeding up my company's database server. Some MySQL inserts are taking over a second during peak times, which is about 500 times longer than they usually take. I've been logging statistics on CPU load, RAM and block I/O, and the only thing regularly causing a bottleneck seems to be disk access, which I suppose is to be expected.

    To combat this, what I'd ideally like to do is find out which tables are responsible for the most disk reading and writing. Our system is incredibly poorly written, and I suspect there will be a lot of places where I can speed things up using memory tables, memcache, or just rewriting how that part of the system works.

    I'm sure I've seen a Linux command somewhere that would let me log read and write stats for all open files at a particular time. It was something like lsof, but it also had info on how much data was queued to be written to or read from each file. If anyone can tell me what that might have been, or anything similar, that would be very useful, as I could just point it at the MySQL table files.
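
    For reference, plain lsof at least shows which table files mysqld has open, even if not the I/O volumes. Something like this, assuming the data directory is /var/lib/mysql (adjust for your setup) and a single mysqld process:

        # List the files mysqld currently has open under the data directory.
        # -a ANDs the -p and +D conditions; the SIZE/OFF column is file size or
        # offset, not the amount of data queued for I/O, so it's only a starting point.
        lsof -a -p "$(pidof mysqld)" +D /var/lib/mysql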

    Failing that, I imagine there are other ways of doing this that I'm overlooking, perhaps even in MySQL itself. Any suggestions would be welcome!
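
    (The sort of MySQL-side thing I mean would be, say, the slow query log to catch the inserts that go over a second, though that tells me which statements are slow rather than which tables are hammering the disk. Assuming MySQL 5.1 or later, where these variables can be changed at runtime:)

        # Log any statement taking longer than 1 second to a file of our choosing.
        mysql -u root -p -e "SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
                             SET GLOBAL long_query_time = 1;
                             SET GLOBAL slow_query_log = 'ON';"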

    Thanks

  2. #2
    Trusted Penguin
    Join Date
    May 2011
    Posts
    4,353
    iostat and vmstat?
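
    For example, run during a peak period (assuming the sysstat package is installed):

        # Extended per-device stats in kB, refreshed every 5 seconds.
        iostat -dxk 5

        # Memory, swap, block I/O and CPU summary every 5 seconds.
        vmstat 5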

  3. #3
    Just Joined!
    Join Date
    Nov 2009
    Posts
    5
    Thanks for the suggestion. As far as I was aware, those commands only give info at the per-partition level though; I'd need something that can tell me stats per file, unless there's some way of using them that I've missed?

    I'm toying with the idea of taking a snapshot of queries over a minute with the query log, then parsing it to see how much data is being read from and written to each table by each query. That seems excessive though, and it might only be vaguely accurate. A rough sketch of what I mean is below.
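
    Something along these lines, assuming the general log can be written to /var/log/mysql/mysql.log, that a per-table statement count is a good enough first approximation of read/write volume, and MySQL 5.1+ so the variables can be set at runtime (the general log will itself add load while it's on):

        # Capture all queries for one minute, then switch the log off again.
        mysql -u root -p -e "SET GLOBAL general_log_file = '/var/log/mysql/mysql.log';
                             SET GLOBAL general_log = 'ON';"
        sleep 60
        mysql -u root -p -e "SET GLOBAL general_log = 'OFF';"

        # Crude tally of statements per table name (INSERT/UPDATE plus anything with FROM).
        grep -oiE '(insert into|update|from) +`?[a-z0-9_]+' /var/log/mysql/mysql.log \
            | awk '{gsub(/`/, "", $NF); print tolower($NF)}' \
            | sort | uniq -c | sort -rn | head -20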

    It just feels like there must be a better way of doing it in Linux. Any ideas?
