  #1 · ForeverACE (Just Joined!, joined Nov 2010, Canada, 9 posts)

    Bash wait function problem while using pipes


    I have spent a few days searching online and haven't been able to find a solution to the problem I have. I'm usually pretty good at finding the answers or figuring it out on my own so that I don't waste others' time (or patience).

    My program takes a given URL and downloads the file while showing the user the status of the download. The main problem lies in having the script properly cancel the download if the user selects 'cancel' once the download has begun.

    Basically, I am trying to have wget download something, pipe its output to sed to translate the progress info, and pipe that to zenity. Then I wait for zenity to finish (or be canceled) and check whether wget is still running (see below).

    Code:
    #download $url and show user its progress
    wget "$url" 2>&1 | sed -u 's/.* \([0-9]\+%\)\ \+\([0-9.]\+.\) \(.*\)/\1\n# Downloading at \2\/s, ETA \3/' | zenity --progress --title="Downloading: $fileName" --width="500" --auto-close &

    wgetPID=$(pidof wget) #grab wget's PID to kill it if needed later
    zenityPID=$(pidof zenity) #grab zenity's PID to monitor if user cancels download

    wait $zenityPID #wait ONLY for zenity to finish/cancel

    #check if wget is still running...
    if [ -n "$(pidof wget)" ]; then
        kill -9 $wgetPID 2>/dev/null #send KILL signal to process 'wget'
    fi
    The problem I have is that even though I can get the correct PIDs of wget and zenity, for some reason the wait function continues to wait for wget to finish even after the user has pressed 'cancel' to close zenity.

    I can get wait to work properly when running separate commands/processes, but I think the problem stems from piping the processes together: wait seems to treat them as one linked job and waits for them all to finish.
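
    For reference, a minimal sketch of the same idea using $! instead of pidof (in bash, $! right after backgrounding a pipeline holds the PID of the pipeline's last command, so there's no risk of grabbing an unrelated process):

    Code:
    #sketch: grab zenity's PID via $!, the last element of the pipeline
    wget "$url" 2>&1 | sed -u 's/.* \([0-9]\+%\)\ \+\([0-9.]\+.\) \(.*\)/\1\n# Downloading at \2\/s, ETA \3/' | zenity --progress --auto-close &
    zenityPID=$!        #PID of zenity, the final command in the pipeline
    wait "$zenityPID"   #should return when zenity exits or is canceled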

    I have a workaround for this problem, but it involves a loop with sleep that periodically checks the status of zenity, and I'd rather not burn CPU time on a polling loop if I can avoid it.

    If anyone can help, or at least confirm my suspicion of the problem it would be greatly appreciated. Thanks for any help that you can provide.

  #2 · tetsujin (Linux Newbie, joined Oct 2008, 117 posts)

    When I run zenity and hit cancel, it issues a HUP signal, which kills my shell. When I try a script similar to yours, commands after that "wait" are never run, because the shell (running the shell script) is gone. wget ignores the signal and finishes its job.

    I don't know if this is a bug or a feature... But you can handle it by using the "trap" command:

    Code:
    wget ...  | zenity --progress &
    wgetPID=$(...)
    trap "kill -9 $wgetPID >/dev/null" HUP
    I don't know what the deal is with wait - I haven't had much success with it and zenity.

  #3 · ForeverACE (Just Joined!, joined Nov 2010, Canada, 9 posts)

    Thanks for the suggestion. I appreciate the response. I haven't really used the trap command so I didn't even think of it.

    I did as you suggested and still wasn't able to get my script to catch the signal from zenity and proceed to kill wget. I even tried trapping 'ERR' (which fires whenever a command exits with a non-zero status) and still no luck.

    In your experience using trap, did you use a loop that keeps checking, or sleep while waiting for the signal? I've looked around online and that seems to be a common method for waiting on trap conditions: basically, set the trap and loop until the trap is sprung. (Sorry for the bad pun. It felt unavoidable.)

    If a loop is what's needed to use trap, then I believe my current workaround is just as effective without invoking trap at all. Using trap means worrying about any other signals from processes I've started, and at this point I'm only doing basic error handling. I don't really want to start worrying about setting a trap and mishandling it (i.e. setting it for one process but having a second process trip it).

    If I've missed the point of your post, or implemented it incorrectly, feel free to correct/inform me of my error.

    Just for additional info and clarification: in either case, whether using my original method or the suggested solution above, if I cancel zenity the dialog closes and stops, but wget continues until the download is done. The wait call also keeps waiting for wget to finish.

    I did try the suggested code both using and not using the wait function but both had the same result.
    Last edited by ForeverACE; 12-01-2010 at 09:35 AM. Reason: Spelling and adding additional information

  #4 · tetsujin (Linux Newbie, joined Oct 2008, 117 posts)

    Quote Originally Posted by ForeverACE View Post
    If a loop is what's needed to use trap, then I believe my current workaround is just as effective without invoking trap at all. Using trap means worrying about any other signals from processes I've started, and at this point I'm only doing basic error handling. I don't really want to start worrying about setting a trap and mishandling it (i.e. setting it for one process but having a second process trip it).
    Well, the main difference is that if you're using trap, the loop can sleep... If you were using a busy-wait loop to watch the lifetime of zenity, you'd want to avoid sleeping so you could kill wget promptly when zenity goes away. But trap will respond as soon as zenity generates that SIGHUP.
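
    Roughly, and assuming this version of zenity really does send SIGHUP on cancel, the shape would be something like this (a sketch, untested):

    Code:
    #sketch: the trap kills wget when the HUP arrives; the loop just naps
    trap 'kill -9 "$wgetPID" 2>/dev/null' HUP
    while kill -0 "$wgetPID" 2>/dev/null; do
        sleep 1   #the trap runs between sleeps, so a cancel is caught within ~1s
    done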

    I agree it's not an ideal solution... Additionally, I tried zenity on another machine (a different revision of zenity as well, I assume) and it doesn't generate the HUP... so I guess it was a "bug" and not a "feature"... Ideally, "wait" would be the way to go, but I don't know how to get it to work with zenity... So how about this solution?

    Code:
    #!/bin/bash
    
    FIFO=/tmp/pid$$.fifo
    rm -f "$FIFO"
    mkfifo "$FIFO"

    #download URL ($1) and show user its progress
    wget "$1" 2>&1 | sed -u 's/.* \([0-9]\+%\)\ \+\([0-9.]\+.\) \(.*\)/\1\n# Downloading at \2\/s, ETA \3/' > "$FIFO" &
    zenity --progress --title="Downloading: $1" --width="500" --auto-close < "$FIFO"

    kill -9 %1   # Kill background job 1 for this shell...
    #The shell will generate output indicating that it's killed the job: I think there's a shell option to disable that...

    rm -f "$FIFO"
    The downside is that you have to create a named FIFO on disk. And it doesn't distinguish between "cancel" and "window close" on the zenity window. But it also won't kill other "wget" jobs found by "pidof", and (apart from the named pipe) it's pretty straightforward: zenity runs as a foreground process and wget is killed when zenity is finished.
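
    About that job-kill message: one idiom that seems to work in bash (an assumption on my part - I haven't checked other shells) is to reap the killed job with a redirected wait, which swallows the notice:

    Code:
    kill -9 %1 2>/dev/null   #kill background job 1...
    wait %1 2>/dev/null      #...and reap it; the redirect hides the "Killed" report
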
    Last edited by tetsujin; 12-01-2010 at 07:01 PM.

  #5 · ForeverACE (Just Joined!, joined Nov 2010, Canada, 9 posts)

    tetsujin, thanks for your help and suggestions. I appreciate your time and effort.

    I did toy with the idea of using a token file that I could use to check the status of my script but ruled against it because I wanted to keep everything as self-contained as possible without having to create, read and/or delete an external file. I definitely have had to do this in the past but I was hoping for a different method (if possible).

    I hope you don't feel like I keep dismissing your suggestions because each one has made me realize shortcomings in my own script. For instance: I hadn't even considered 'pidof' might accidentally grab the wrong process. I believe I was looking at the man pages for both 'pidof' and 'wait' at the same time and got a little overzealous in implementing the 'wait' function.

    Your note made me realize that I can keep my existing workaround (which uses a sleep call) by just using 'pgrep' instead of 'pidof'. That way, I can guarantee that I am only getting PIDs of processes my script started and no others. If I'm already using a sleep call and a loop, I might as well check directly whether my process is running rather than using trap (although maybe trap uses fewer resources?). (See my method below.)

    Just FYI, I like to keep things as readable as possible, and that's why there's a variable called 'empty'. It just makes things easier for me when debugging, because I know that if I'm testing for an empty variable and it doesn't work, then I made some kind of typo.

    Code:
    #download $url and show user its progress
    wget "$url" 2>&1 | sed -u 's/.* \([0-9]\+%\)\ \+\([0-9.]\+.\) \(.*\)/\1\n# Downloading at \2\/s, ETA \3/' | zenity --progress --title="Downloading: $fileName" --width="500" --auto-close &

    myScriptPID=$$ #this script's own PID, so pgrep only matches OUR children

    #Start a loop testing if zenity is running, and if not, kill wget
    wgetRunning=0 #still running in background? Default is yes (0)

    #using this script's PID, get the child PID of wget
    wgetPID=$(pgrep -x -P $myScriptPID wget)

    #start loop to check if zenity is still running
    while [ $wgetRunning -eq 0 ]
    do
        sleep 1 #give CPU time to other processes

        #check if zenity is still running
        if [ "$(pgrep -x -P $myScriptPID zenity)" = "$empty" ]; then
            #zenity is done/canceled

            #check if wget is still running
            #(alternately, I could test $wgetPID directly)
            if [ "$(pgrep -x -P $myScriptPID wget)" != "$empty" ]; then
                #wget is still running, but zenity isn't
                kill -9 $wgetPID 2>/dev/null #send KILL signal to wget
                wgetRunning=1 #wget should be dead now, stop the loop
            else
                #both zenity and wget are done
                wgetRunning=1 #stop the loop
            fi
        fi
    done
    With the above code I was able to start 2 different downloads (launching the script twice), cancel one while the other kept working, and restart the canceled download without any problems. I would still like to know why I can't seem to get 'wait' to work with the above, but I guess this will have to do.

    Thanks again for the help.

    BTW, interesting note about SIGHUP not being generated on another system. I guess I sometimes assume too quickly that a script will act the same on other *nix systems.

  #6 · tetsujin (Linux Newbie, joined Oct 2008, 117 posts)

    Quote Originally Posted by ForeverACE View Post
    tetsujin, thanks for your help and suggestions. I appreciate your time and effort.

    I did toy with the idea of using a token file that I could use to check the status of my script but ruled against it because I wanted to keep everything as self-contained as possible without having to create, read and/or delete an external file. I definitely have had to do this in the past but I was hoping for a different method (if possible).
    I agree it'd be nice not to have to create any files... Though if it makes you feel any better, it's not a real file, it's a pipe (the same as you'd create with the pipe() function, the same as the shell creates when you pipe two processes together) - the only difference is that it has a filename. The output of sed isn't going to disk; it's going to a FIFO in memory, just as with a direct pipeline syntax.

    In principle you should be able to do this sort of thing in a Unix shell without creating a filename for the fifo - create a pipe and assign its two ends to numeric file descriptors within the shell... In practice I think most shells can't do that, and working with additional descriptors is something people mostly just avoid...
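
    (That said, bash's process substitution gets close: <(...) hands you an anonymous pipe exposed under /dev/fd, so a sketch like the following should avoid the named FIFO entirely. Assuming bash, and untested here:)

    Code:
    #bash-only sketch: process substitution stands in for the named FIFO;
    #<(...) runs the pipeline and exposes its output as a /dev/fd/N path
    zenity --progress --title="Downloading: $1" --width="500" --auto-close \
        < <(wget "$1" 2>&1 | sed -u 's/.* \([0-9]\+%\)\ \+\([0-9.]\+.\) \(.*\)/\1\n# Downloading at \2\/s, ETA \3/')
    #cleaning up a still-running wget would still need something like the
    #pgrep approach from earlier in the thread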

    Korn shell has an interesting feature called "coprocesses": basically it lets you run a background job and connect its input and output to a special identifier in ksh, which you can then attach as the input and/or output of another process. Basically:

    Code:
    CMD |&     #starts a coproc, which runs as a background job
    read -p varname    #reads in a line of text from the coproc's output
    cmd2 <&p    #spawn cmd2 with its input connected to coproc's output
    You can also do things like redirect an already-running coproc's input/output to numeric file descriptors, which frees you up to start another coproc. (The coproc is accessed with special syntax which doesn't allow for more than one.)

    Coprocs are normally expected to follow certain rules about how they handle input: specifically, coprocs need to either write their output unbuffered, or flush their output regularly enough that the caller can pull a line of data out when they feel like it... For instance, one common type of coproc would take in a line of input, process it, and return a line of output. But by default, stdio is buffered unless it's connected to a TTY (except for stderr, I believe) - so for a program to work properly as a coproc filter, it would need to flush its output buffers after each line.
    (It seems like a complicated problem from the shell's point of view... There's no way for the shell to make the coproc handle its output in a way that's friendly to coproc usage... and coproc is a feature that's not used much, I think, so there's been little pressure to fix it.) For example:
    Code:
    #!/bin/ksh
    perl -pe 'tr/a-zA-Z/n-za-mN-ZA-M/;' |&     # Coproc to do rot13!  Except...
    
    echo "secret text" >&p    #feed input into the coproc and...
    read -p answer      #This blocks!  perl is buffering its output: we can't read a line of output until it flushes its buffers!
    
    exec 3>&p 3>&-   # dup()'s coproc output to file descriptor 3 and then closes it.  Coproc then dies via SIGPIPE next time it tries to write to output...
    kill $!   # kill last background job - the coproc 
    
    sed -u -e 's/foo/bar/gi;' |&
    #This one works...  sed's -u option makes it coproc-friendly.
    In the script below, the coproc doesn't use input, so we don't need to worry, and we can just use the coproc as a handy way to create an unnamed pipe inside the shell:

    Code:
    #!/bin/ksh
    
    #download URL ($1) and show user its progress
    wget $1 2>&1 | sed -u 's/.* \([0-9]\+%\)\ \+\([0-9.]\+.\) \(.*\)/\1\n# Downloading at \2\/s, ETA \3/' |&
    wgetjob=$!
    
    zenity --progress --title="Downloading: $1" --width="500" --auto-close <&p
    
    # zenity has terminated, kill the coproc.
    wgetpid=$(pgrep -x -P $wgetjob wget)
    if [ -n "$wgetpid" ]; then
        kill $wgetpid
    fi
    I've tested this with ksh93 and mksh (mksh is a derivative of the public domain korn shell, and seems to be the version of choice if you want interactive command history and so on...)

    This version was a bit tricky: for some reason, killing the coprocess job ($!) didn't kill off the processes spawned as part of the job... I'm not sure what that's about. So to get the PID for wget, I use $! to get the job PID and then pgrep its child processes to get wget. For some other reason, using pgrep to find the PID of wget with the first ksh instance's PID as a parent also did not work. (I guess the coproc isn't considered a child process of the main shell? I haven't looked at the process tree...)
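
    (If I wanted to check, something like this would show the hierarchy - assuming a procps-style ps:)

    Code:
    ps -o pid,ppid,comm --ppid $$   #list the direct children of this shell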

    Quote Originally Posted by ForeverACE View Post
    I hope you don't feel like I keep dismissing your suggestions because each one has made me realize shortcomings in my own script.
    It's cool. One of my interests is trying to improve upon the Unix shell. That also means educating myself quite a lot about how it works in practice and how it's used. Fielding questions like this is a pretty good way to do that.

    Quote Originally Posted by ForeverACE View Post
    BTW, interesting note about SIGHUP not being generated on another system. I guess I sometimes assume too quickly that a script will act the same on other *nix systems.
    I'm guessing it's just different behavior in different versions of zenity. I don't know if that behavior was classified as a bug, and they fixed it, or what - but I think the difference is in zenity itself.
    Last edited by tetsujin; 12-02-2010 at 07:16 PM.

  #7 · ForeverACE (Just Joined!, joined Nov 2010, Canada, 9 posts)

    Quote Originally Posted by tetsujin View Post
    I agree it'd be nice to not have to create any files... Though if it makes you feel any better, it's not a real file, it's a pipe (the same as you'd create with the pipe() function, same the shell creates when you pipe two processes together) - the only difference is that it has a filename. The output of sed isn't going to disk, it's going to a FIFO in memory, just as with a direct pipeline syntax.
    I did actually know that; I guess I should be a little more careful with the terminology I use. I get so used to dumbing stuff down so others can understand that I sometimes use incorrect terminology.
    Quote Originally Posted by tetsujin View Post
    ...and working with additional descriptors is something people mostly just avoid...
    I definitely do. It's not that I think they're bad - I actually think they're a very powerful way of monitoring processes; I just don't do a lot of scripting, so when I get back to it I need to quickly refresh myself on what's going on in my code, and I find it easier to stay away from them. Hence the variable called 'empty': it wastes a little memory and takes a very slight performance hit, but it makes it really easy to see what's going on at a glance.
    Quote Originally Posted by tetsujin View Post
    Korn shell has an interesting feature called "coprocesses": basically it lets you run a background job and connect its input and output to a special identifier in ksh, which you can then attach as the input and/or output of another process.
    I didn't know that at all. I still have so much to learn. I've only been using linux for 5 years now and only scripting in it for the last 2. I've been trying to slowly learn the ins and outs, but there's like 50 years of architectural history (including Unix) that I think a lot of people take for granted 'cause they've been using it for so long. They just know there's a certain command you should use over another in a given circumstance. I don't have that experience or knowledge, so I have to do some research and some trial and error.

    I'm still learning how to fully use bash, but I'd like to get into some of the other shell languages as well.
    Quote Originally Posted by tetsujin View Post
    Code:
    #!/bin/ksh
    
    #download URL ($1) and show user its progress
    wget $1 2>&1 | sed -u 's/.* \([0-9]\+%\)\ \+\([0-9.]\+.\) \(.*\)/\1\n# Downloading at \2\/s, ETA \3/' |&
    wgetjob=$!
    
    zenity --progress --title="Downloading: $1" --width="500" --auto-close <&p
    
    # zenity has terminated, kill the coproc.
    wgetpid=$(pgrep -x -P $wgetjob wget)
    if [ -n "$wgetpid" ]; then
        kill $wgetpid
    fi
    What I found on my PC was that when I ran a few commands piped together, they would have sequential PIDs, and $! would grab whatever the last piped command was. So, in your example above, I would get sed's PID instead of wget's. I'm not sure if this is system-specific or just bash, but every time I ran my script with 'wget | sed | zenity', the PIDs would be something like 4623, 4624, 4625 (in that order).
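
    (For what it's worth, bash gives you a second handle on a background pipeline: 'jobs -p' prints the PID of a job's process-group leader, which for a pipeline is typically its first command. A sketch, assuming bash:)

    Code:
    wget "$url" 2>&1 | sed -u 's/.* \([0-9]\+%\)\ \+\([0-9.]\+.\) \(.*\)/\1\n# Downloading at \2\/s, ETA \3/' | zenity --progress &
    lastPID=$!               #PID of the LAST element of the pipeline (zenity)
    leaderPID=$(jobs -p %+)  #group-leader PID of the newest job (wget, the first element)
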
    Quote Originally Posted by tetsujin View Post
    I use $! to get the job PID and then pgrep its child processes to get wget. For some other reason, using pgrep to find the PID of wget with the first ksh instance's PID as a parent also did not work. (I guess the coproc isn't considered a child process of the main shell? I haven't looked at the process tree...)
    Damned if I know. I do vaguely remember something similar from school about UNIX. I'm not sure it's the same thing, but I remember having issues with thread and process management.
    Quote Originally Posted by tetsujin View Post
    One of my interests is trying to improve upon the Unix shell. That also means educating myself quite a lot about how it works in practice and how it's used. Fielding questions like this is a pretty good way to do that.
    Well, I appreciate your help. You've obviously put a lot of time and effort into your answers and it has been a great help (and also an education). I'm not technically a programmer. I've had to learn as I go because of various jobs I've had (or personal projects for my PC) and therefore am pretty raw when it comes to coding. I had one semester of QBasic back in the day, one of C (for hardware), 8088 assembly, a handful of microcontroller assembly languages, and one network hardware/software layer stack course. It's all pretty foggy.

    I'd like to think I'm a decent programmer but I know there are a lot of better ways of doing things that I don't even know exist. Constantly learning.

    tetsujin, thanks again!

  #8 · tetsujin (Linux Newbie, joined Oct 2008, 117 posts)

    (re: named pipes)

    Quote Originally Posted by ForeverACE View Post
    I did actually know that, I guess I should be a little careful with the terminology I use. I get to used to dumbing stuff down for others so they can understand that I sometimes use incorrect terminology.
    Ah, OK - sorry for explaining something you already knew, then. I certainly agree with your feeling that creating an unnecessary directory entry for a pipe that's internal to a single shell script is pretty unappealing...

    (re: avoiding the use of file descriptor redirects)
    I definitely do. It's not that I think they're bad because I actually think they're a very powerful way of monitoring processes; I just don't do a lot of scripting so when I get back to it, I need to quickly refresh myself on what's going on in my code and I find it easier to stay away from it.
    Personally I feel like the whole feature is a bit of a mess. The syntax only allows for the use of ten different file descriptors (actually, I think they may have extended the syntax to allow for more), and three of those already have special meaning... Users using the syntax have to choose what fd number to use at any given time, and the fd's are global to a shell instance and passed to any processes spawned by the shell. (Compare that to environment variables, which you can choose to export or not, and there's scoping, and a wide enough selection of names that you can choose one in one bit of code and, even if it were global, be pretty confident it wouldn't conflict with something else...) And even though you could pass an open file descriptor to a program - in practice there are very few programs out there that will interface with the shell that way...
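
    (For anyone unfamiliar, the basic moves look something like this - a bash sketch with a made-up filename:)

    Code:
    exec 3<"/tmp/example.txt"   #open a file for reading on descriptor 3
    read -r line <&3            #read one line from fd 3
    exec 3<&-                   #close fd 3 when done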

    (re: ksh and coprocesses)
    I didn't know that at all. I still have so much to learn. I've only been using linux for 5 years now and only scripting in it for the last 2. I've been trying to slowly learn the ins and outs but there's like 50 years of architectural history (including Unix) that I think a lot of people take for granted 'cause they've been using it for so long. They just know that there's a certain command that you should use over another in a given circumstance.
    I think there's a lot that most Linux users don't know about the system - certainly there was a lot I had to learn when I first picked up "Advanced Programming in the Unix Environment" (great book if you're interested!)

    ksh is kind of an odd case. It was proprietary software until 2000, and bash and tcsh were already well entrenched on free systems by that point. I think that hampered adoption of ksh and the features that came with it. That, combined with the fact that coprocs are a little difficult to use properly in the first place (due to the buffering issue), is probably why they haven't been adopted in most other shells. (zsh is the main exception, I guess - it includes some ksh features, including coprocs.) I always thought it was a great feature, though; I just wish the implementation could have been a bit better.

    Most of the various shells are pretty similar to each other for the basic stuff, usually it's just the "extra" features that will differ in big ways...

    Quote Originally Posted by ForeverACE View Post
    What I found on my PC was that when I ran a few commands piped together, they would have sequential PIDs, and $! would grab whatever the last piped command was. So, in your example above, I would get sed's PID instead of wget's. I'm not sure if this is system-specific or just bash, but every time I ran my script with 'wget | sed | zenity', the PIDs would be something like 4623, 4624, 4625 (in that order).
    Hm, I didn't even think of that. Yeah, when the shell kicks off a job, the order in which it creates those processes, and their hierarchy, is an implementation detail; it varies from shell to shell. In the case of the versions of ksh I was using, it seems the coproc jobs were also getting a forked instance of ksh, which was the PID returned by $! in the script. zsh's coproc implementation doesn't do that, and it's conceivable that this policy could change in some version of ksh as well - so the bit of my last script where I find the PIDs could probably do with some improvement.

    Quote Originally Posted by ForeverACE View Post
    I'd like to think I'm a decent programmer but I know there are a lot of better ways of doing things that I don't even know exist. Constantly learning.
    You know, before this thread I didn't even know what zenity was or how to use it at all...

    And I STILL don't know why wait didn't work! XD

  #9 · ForeverACE (Just Joined!, joined Nov 2010, Canada, 9 posts)

    RE: Named Pipes
    Quote Originally Posted by tetsujin View Post
    Ah, OK - sorry for explaining something you already knew ...
    I wasn't intending my statement as any kind of knock against you, nor was I upset by your comment. It's always hard to know what someone does and doesn't know, and you can't help them unless you know what they've already looked at. No harm, no foul.

    I'm sure you've had the experience of spending a bunch of time helping someone out just to find it was some stupid problem you assumed they would have tried first. Plus, I have a tendency to be direct and it can sometimes come across as dickish. I guess that's the problem with text: you can't see my face to catch my intent, and I feel too old to use a $h1t load of emoticons. =P

    Quote Originally Posted by tetsujin View Post
    I think there's a lot that most Linux users don't know about the system - certainly there was a lot I had to learn when I first picked up "Advanced Programming in the Unix Environment" (great book if you're interested!)
    That's the problem I always find: there's just too much to learn and not enough time. I'm in the hardware technology field and I find that if you don't pay attention for six months, you're out of the game. Add on top of that all of the different architectural changes and differences between PCs, embedded systems, OSes, programming languages... ARGHHHH! Information overload! But I will definitely add 'Advanced Programming in the Unix Environment' to my ever-growing list of books to read. Thanks for the suggestion (no sarcasm intended).

    RE: Ksh and Coprocesses
    Quote Originally Posted by tetsujin View Post
    ksh is kind of an odd case. It was proprietary software until 2000, and bash and tcsh were already well entrenched on free systems by that point. I think that hampered adoption of ksh and the features that came with it. That, combined with the fact that coprocs are a little difficult to use properly in the first place (due to the buffering issue) is probably why they haven't been adopted in most other shells...
    When I first moved to linux, I looked into what type of scripting language I should dive into. I ended up choosing bash for exactly that reason. It seemed to be the tried, tested and true choice for most *nix users/programmers. It's probably a lot like C/C++. There are so many other languages out there but there are a dedicated core who continue to use C++. I figured I would start with the most common language so if I got stuck, I would have a better hope of finding answers with a simple search. I figured I would try to branch out once I got a better understanding of the basics.

    RE: Zenity
    Quote Originally Posted by tetsujin View Post
    You know, before this thread I didn't even know what zenity was or how to use it at all...
    I've grown to love it. I'm partial to using my mouse, so anything that minimizes typing and makes things look pretty makes me happy. Plus, a well-thought-out script, depending on its intended purpose, can offer the user options so there's barely anything to input. For example, the script I'm working on was intended so I could download any video link from the internet, supply a 'user agent' string (preset or custom) if needed, and get a quick directory-selection dialog for its destination.

    Basically, I have a movie collection database and I like to have the movie trailers for each of them, so if you click on the picture of the movie, the trailer plays. The problem I kept having was that Apple only allows their QuickTime player to access the videos. I know I can get a plugin to get Firefox to work, but then I have to track down the cached file, rename it, move it, etc. I just figured I would save myself the work.

    RE: Wait function
    Quote Originally Posted by tetsujin View Post
    And I STILL don't know why wait didn't work! XD
    Me either! Oh well, I guess it wasn't meant to be. Who knows, maybe I'll eventually find out why. In the meantime, I appreciate the info and help.

  #10 · tetsujin (Linux Newbie, joined Oct 2008, 117 posts)

    Quote Originally Posted by ForeverACE View Post
    When I first moved to linux, I looked into what type of scripting language I should dive into. I ended up choosing bash for exactly that reason. It seemed to be the tried, tested and true choice for most *nix users/programmers. It's probably a lot like C/C++. There are so many other languages out there but there are a dedicated core who continue to use C++. I figured I would start with the most common language so if I got stuck, I would have a better hope of finding answers with a simple search. I figured I would try to branch out once I got a better understanding of the basics.
    I have a somewhat different opinion of bash, personally: As Unix shells go it's pretty good, but I feel like people make it their default choice because other people have done the same - not necessarily because it's the best choice. And then when doing things in bash has turned out to be too cumbersome or difficult they turn to Perl or Python - and still it never occurs to most that there's something missing in bash.

    Of course, when I think about what the shell ought to be - I tend to think somewhat more along the lines of MS Powershell in terms of capabilities... That kind of approach would be a real hard sell to Linux users, I think.

    RE: Wait function

    Me either! Oh well, I guess it wasn't meant to be. Who knows, maybe I'll eventually find out why. In the meantime, I appreciate the info and help.
    Maybe I'll look some more into that... I'm curious...
