Thread: autodeletion from tmp?
I have a CentOS 5.2 instance on Amazon that's been running without a restart for nearly 500 days.
There's only one user on it: me, and I'm root.
I created a couple of subfolders under /tmp.
Then I hardly touched the machine and didn't use the folders for about six months. When a web service recently tried to generate a file in one of those subfolders, it failed: the folder was missing, and so was the other one.
Nobody deleted the folders.
Is there something that cleans up /tmp automatically?
I've never heard of this. I know that with some OSes (Solaris, I think) /tmp is cleared on reboot, but this box hasn't rebooted.
It's a bit of a mystery.
There is a cron job that cleans up.
Look at /etc/cron.daily/tmpwatch.
And quite frankly:
tmp = temporary = not permanent.
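On CentOS 5 that cron script is a thin wrapper around the tmpwatch binary, which (if I recall the stock script correctly; check your own copy) removes anything under /tmp untouched for 240 hours, i.e. 10 days. A rough, portable approximation of that behavior using find, with a throwaway demo directory and made-up filenames:

```shell
# Approximate tmpwatch's age-based cleanup with find (demo paths invented)
dir=$(mktemp -d)
touch "$dir/old.txt" "$dir/new.txt"
# Backdate old.txt so it looks untouched for 11 days (GNU touch)
touch -d "11 days ago" "$dir/old.txt"
# Remove files not modified for more than 240 hours (10 days)
find "$dir" -type f -mtime +10 -delete
ls "$dir"   # only new.txt remains
```

The real tmpwatch is smarter (it can key on access time and skips open files), but the age-threshold idea is the same.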
Actually, I make /tmp on my servers really small, like 10 MByte,
so my DBAs and devs don't even try to (ab)use tmp.
Thanks - mystery solved then.
I *had* the web service write a temporary .tex file, compile it to PDF, and push it to the web; that's what I was using the folders for. It's got its own spot elsewhere now.
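The "own spot" is basically a per-job directory under a path the application owns, created with mktemp -d, so the daily /tmp cleanup never sees it. A minimal sketch, with invented paths and a stand-in for the real tex job:

```shell
# Hypothetical app-owned spool area instead of /tmp
base="$HOME/myapp-spool"
mkdir -p "$base"
# Private, unpredictable work dir per job
work=$(mktemp -d "$base/job.XXXXXX")
# Stand-in for the real .tex file the service writes
printf '%s\n' '\documentclass{article}' > "$work/report.tex"
ls "$work"
```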
So, just out of interest (I am genuinely puzzled here), about the 10 MB /tmp: is it that you are trying to encourage better coding practice, or are you programming flash-based embedded systems?
Commercial software I install on Unix demands (and states in the support notes) that /tmp must be, e.g., 600 MB.
Is the preferred approach to point TMPDIR at a new temp directory in the environment when you have to do stuff like that, or do you just never need a /tmp bigger than 10 MB?
Also, personal accounts have to 'su' to system accounts, so WinSCP over SSH to the Unix/Linux box means transferring files as yourself to "somewhere", then SSHing to the box, su-ing to the system user, and moving the file, with new permissions, to its proper location.
Maybe it's a case-by-case thing, but I'm certain I couldn't live with a /tmp that small.
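The mv-then-chmod step of that dance can at least be collapsed: install(1) copies and sets the final mode in one go (changing the owner additionally needs root). A local simulation of the staging-to-target move, with throwaway paths standing in for the real ones:

```shell
# Simulate the staging file and the app's target directory locally
src=$(mktemp)
echo "payload" > "$src"
dest=$(mktemp -d)
# Copy into place and set the final mode in a single step
install -m 640 "$src" "$dest/bigfile.dat"
stat -c %a "$dest/bigfile.dat"   # prints 640
```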
To explain the 10 MByte: it is educational, and it is also for the health of the system.
We write our web applications for internal use, plus one that is quite exposed, i.e. a bigger public website.
As always, the sysadmins (I am one) are architects at the beginning of a project and janitors after it goes live.
So, to lower the maintenance work, I require our applications to be contained, defined, and documented.
If the devs need a working directory, they should set it as a key in a config file.
Same for data directories, DB connection strings, etc.
No hardcoded hostnames, directories, filenames, feature switches, etc.
That way the sysadmin has the chance to, e.g., shuffle the IO load to different disks just by editing the conf file.
Reinstallation or migration also becomes easier: anything the app needs is defined in the conf file.
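As a sketch of what such a conf file can look like when the app's wrapper script simply sources it (keys, paths, and the DSN below are invented for illustration):

```shell
# Hypothetical app.conf; the app sources it instead of hardcoding paths
conf=$(mktemp)
cat > "$conf" <<'EOF'
WORK_DIR='/srv/app/work'
DATA_DIR='/srv/app/data'
DB_DSN='dbi:Pg:dbname=appdb;host=dbhost'
EOF
. "$conf"
# The sysadmin can now move the IO load by editing WORK_DIR alone
echo "$WORK_DIR"
```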
From an operations point of view:
I want to avoid things like: "Oh, I have that 500 GByte DB dump, I need it on server X. Let's just copy it to /tmp."
Because DB dumps are sensitive data, and sensitive data doesn't belong in a directory that can be read by anyone.
Also, guess who would be responsible for cleaning out /tmp afterwards?
As for the commercial software you mention:
still bad design IMHO, but that is something I cannot control.
For installation purposes only, it is quite easy to use a bigger /tmp temporarily.
If the software needs a big /tmp permanently, I would size the machine's partitions to accommodate that.
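For that installation-only case, one common trick, sketched here with an invented path, is to point TMPDIR at a roomier filesystem; mktemp and many installers honor it:

```shell
# Hypothetical scratch area on a filesystem with room to spare
export TMPDIR="$HOME/scratch-tmp"
mkdir -p "$TMPDIR"
# mktemp honors TMPDIR, so temp files land there instead of /tmp
f=$(mktemp)
echo "$f"
```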
Last edited by Irithori; 08-17-2010 at 07:59 AM.