Hi all,

I'm hoping someone can provide some clarity on this issue for me.

I'm running Linux on a PC/104 board. The application requires bit-banging I/O pins to simulate a comms interface.

So I set an I/O pin high - read another I/O pin - wait 1 ms - set a different pin high - read an analog channel - wait 1 ms - do this/that - wait 5 ms ... you get the picture.

All of that needs to be done in one sequence - i.e. no spurious, lengthy context switches to other processes (the console interface, for example). My understanding is that by default the kernel handles task switching / delays / timers at a granularity on the order of 10 ms (the 100 Hz timer tick). An absence of that length from the running program would invalidate the results.
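One quick way to see what granularity you're actually getting is to request a short sleep and time it. This is just a measurement sketch (no hardware assumed); on an old 100 Hz-tick kernel the result typically comes back in the 10-20 ms range, while a kernel with high-resolution timers lands much closer to the requested 1 ms.

```c
#define _GNU_SOURCE
#include <time.h>

/* Request a 1 ms sleep and return how long it actually took, in
 * microseconds. The overshoot shows the effective timer granularity
 * (plus scheduling latency) on this kernel. */
long measure_sleep_us(void)
{
    struct timespec req = { 0, 1000000L };  /* 1 ms */
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    nanosleep(&req, NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    return (t1.tv_sec - t0.tv_sec) * 1000000L
         + (t1.tv_nsec - t0.tv_nsec) / 1000L;
}
```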

There would be a lot of idle time during operation; that is, it would be fine to:
1- turn a pin on
2- read another pin (no more than ~500 us later)
3- idle time (program suspended perhaps - don't really care)
4- turn another pin on (no more than ~2 ms after step 2)
5- ...
The program just can't tolerate random 10 ms absences from execution.
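The sequence above can be sketched with absolute deadlines, so that latency in one step doesn't accumulate into the next. `set_pin()` / `read_pin()` are hypothetical placeholders for your board's actual I/O access, stubbed out here so the sketch compiles:

```c
#define _GNU_SOURCE
#include <time.h>

/* Hypothetical pin accessors - stand-ins for the board-specific
 * port I/O; stubbed so this sketch is self-contained. */
static void set_pin(int pin, int val) { (void)pin; (void)val; }
static int  read_pin(int pin)         { (void)pin; return 0; }

/* Advance an absolute deadline by ns nanoseconds and sleep until it.
 * TIMER_ABSTIME means we sleep to a point in time, not for a duration,
 * so the step spacing doesn't drift if one step runs late. */
static void sleep_until(struct timespec *t, long ns)
{
    t->tv_nsec += ns;
    while (t->tv_nsec >= 1000000000L) {
        t->tv_nsec -= 1000000000L;
        t->tv_sec++;
    }
    clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, t, NULL);
}

/* Run the sketched sequence; returns the number of steps performed. */
int run_sequence(void)
{
    struct timespec next;
    int steps = 0;

    clock_gettime(CLOCK_MONOTONIC, &next);

    set_pin(0, 1); steps++;                /* 1: turn a pin on        */
    (void)read_pin(1); steps++;            /* 2: read another pin     */
    sleep_until(&next, 2000000L); steps++; /* 3: idle ~2 ms           */
    set_pin(2, 1); steps++;                /* 4: turn another pin on  */
    /* 5: ... rest of the sequence ... */

    return steps;
}
```

Note this only controls when the process *asks* to run; whether it actually runs on time is the scheduling question below.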

If this is pretty much the only process running / loading the CPU (sure, in the background there will be VGA / keyboard handling, etc.), might it be safe to assume that it will not be interrupted for lengthy periods (i.e. 2 ms or more)?

Do I need to go with a real-time kernel such as RTLinux?
But then it seems I would have the joy of rewriting all the device drivers for the manufacturer-specific parts to run under RTLinux, right?

Or can I mess with SCHED_FIFO, give the process a high priority, and drive it off the real-time clock interrupt to guarantee (somewhat) deterministic run times?

Or am I totally missing the boat here and overlooking the obvious?

Any input would be much appreciated.