Monitoring Commands

(top program for this)



Here is watch_execve which works for FreeBSD:

(top 2 scripts first)

Ever wonder what commands are run on your system in a given time period, or during a certain operation? Let's say you have a Linux server and you click something in a GUI application that runs on it (maybe some sort of management interface that initiates backups), and you wonder: "when I click this backup button, what does it actually do on the Linux back end? It says it does an rsync, but what are all of the switches and options that it actually runs that rsync command with?" Well, if that GUI app runs a Linux command, with this you can find out what it was.

This article is really long and has many commands with different variations of output. I selected the best two commands (the ones I feel will be used the most). Both have their pros and cons.

At the top I like to point out the winners (the best commands, the ones I feel will get used the most), so if you're using this as a reference you don't have to scroll much. Here are the ones that will get used the most, because they have the best resolution (they might not give the most interesting output at first, but the analytical aspect here is the best).

Use either command1 or command2 depending on whether you need to capture short-running or long-running commands. Short-running commands like ls need higher-resolution capture, like that of command2. Slow commands like copies, moves, rsyncs, or daemons can be captured with command1. Command2 will capture the most commands, but its output is sorted and can't be followed.

(command1) The winner from PFS is the Tee Version (OUTPUT: output appears on screen and is saved to a file with timestamps. TO USE: just copy-paste it into a shell and press Enter, or make a script out of it and run that; while it's running, perform the operations you want to see the commands of, and they will appear on the screen. Note: this captures about 90% of the commands that run; by 90% I just mean not all of them, since some quick commands slip past):

PRO: the script's output can be followed using a program like tail -f, with the most recently run commands appearing at the bottom of the list. In fact, just running the script below will show the output in a tail -f fashion, so you don't have to run a separate tail -f command. Just copy-paste the script below into a shell and hit Enter.

CON: slower resolution; might miss a few short-running commands like ls, netstat, etc.
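The original script is not reproduced here, so below is a minimal sketch of how a Tee Version monitor can work. Assumptions are mine: the file names psf.txt and seen.txt, the 0.1 s interval, the function wrapper (with an optional loop count so it is easy to test), and the use of `--` with grep instead of the backslash-escaping trick discussed in the notes further down.

```shell
#!/bin/sh
# Sketch of a "tee version" monitor: each newly seen command is printed
# to the screen with a timestamp AND appended to a log you can tail -f.
OUT1=psf.txt    # timestamped log of newly seen commands
SEEN=seen.txt   # plain list of every command already recorded

monitor_tee() {
    N=${1:-0}   # optional: number of scan loops (0 = run until Control-C)
    i=0
    touch "$OUT1" "$SEEN"
    while [ "$N" -eq 0 ] || [ "$i" -lt "$N" ]; do
        i=$((i+1))
        ps -e -o args= | while IFS= read -r CMD; do
            # -F fixed string, -x whole line, -q quiet; "--" ends option
            # parsing so commands starting with "-" (like -bash) are safe
            if ! grep -Fxq -- "$CMD" "$SEEN"; then
                printf '%s\n' "$CMD" >> "$SEEN"
                printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$CMD" | tee -a "$OUT1"
            fi
        done
        sleep 0.1
    done
}
# monitor_tee   # start it; stop with Control-C
```

Paste it into a shell and run `monitor_tee`; pass a number (e.g. `monitor_tee 100`) to run a fixed count of scans instead of looping forever.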

(command2) The winner from the MONITORING MOST COMMANDS section (OUTPUT: saves to the file all-comm.txt. TO USE: copy-paste and press Enter, or run as a script, then perform the operations you want to record the commands of; after you think those commands are done, close the monitoring script with Control-C):

PRO: high resolution; captures most of the fast commands (maybe even all). Can capture commands like ls and cat.

CON: the script's output is sorted, so you can't follow it with a command like tail -f (the most recent commands aren't appended to the bottom; they are sorted throughout the file, so a follow-style command like tail -f wouldn't make sense and wouldn't work).
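The script itself is missing above, so here is a minimal sketch of the idea (the file name, the loop body, and the testable function wrapper are my reconstruction): snapshot ps as fast as possible, append, and keep the file sorted and deduplicated.

```shell
#!/bin/sh
# Sketch of the sort/uniq monitor: highest capture rate, but the output
# file is kept sorted, so it cannot be followed with tail -f.
OUT1=all-comm.txt

monitor_sorted() {
    N=${1:-0}   # optional: number of snapshot loops (0 = run until Control-C)
    i=0
    touch "$OUT1"
    while [ "$N" -eq 0 ] || [ "$i" -lt "$N" ]; do
        i=$((i+1))
        ps -e -o args= >> "$OUT1"    # append a snapshot
        sort -u "$OUT1" -o "$OUT1"   # dedupe in place so the file stays small
        wc "$OUT1"                   # show size in lines/words/bytes
    done
}
# monitor_sorted   # start it; press Control-C when done monitoring
```

Run `monitor_sorted`, do the operations you want to record, then Control-C and inspect all-comm.txt.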

You can read the rest of the article for other variations and explanations etc.

tl;dr from here on out



Audit commands run by the system (you can't see which user ran them, but you can just copy-paste this script into a shell and follow all of the commands).

This will generate a list of all processes and keep doing so continuously, adding only new processes. You can follow the output file (psf.txt; change the name by editing the value of $OUT1) with tail -f.

You can make a script out of the program and run it (for example, save it as psf.sh, make it executable with chmod +x, and execute it).


NOTE: Output saves to psf.txt

You can also just copy-paste the script into a shell and you will see the output.

You can follow the resulting psf.txt with tail -f psf.txt.


You can turn off psf with kill or kill -9 on its PID, or Control-C it. If you restart the command, it will use the existing results as its starting point. If you want to clear the results, do this first:
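The snippet was not included; assuming the defaults from the notes below (results dump to psf.txt in the current directory), clearing the results is just removing or truncating that file:

```shell
rm -f psf.txt   # delete the old results entirely
# or keep the file but empty it:
# : > psf.txt
```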



NOTE: all results dump to current directory

NOTE: I had to use a workaround because of a grep quirk: if you feed it a pattern that starts with a dash, it parses the pattern as options and appears to freeze. grep "-bash" File actually runs grep with the options -b, -a, -s, and -h, takes "File" as the pattern, and then sits waiting for input on stdin. Instead you need to search for it like this: grep "\-bash" File. I also had to save the contents to the file as \-bash instead of -bash so that the next time it sees it, it acknowledges its existence. If I appended -bash instead of \-bash, then when the grep for \-bash runs we would have issues.
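A small demonstration of the workarounds (the sample file name is mine; note that `--` and `-e` are standard alternatives to the backslash trick):

```shell
printf '%s\n' '-bash' '/bin/ls' > sample.txt

# grep "-bash" sample.txt   # hangs: grep reads -bash as the options -b -a -s -h
grep "\-bash" sample.txt    # the escaping trick used by this script
grep -- "-bash" sample.txt  # "--" ends option parsing
grep -e "-bash" sample.txt  # -e explicitly marks the next argument as a pattern

rm -f sample.txt
```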

NOTE: If I used regular grep, extended grep, or word grep, I could get errors, because some program names contain characters that grep would interpret as pattern syntax. So the best option is to run grep with -F (fixed strings).

NOTE: some variables are referenced like this, $VAR, and some like this, ${VAR}. Why the difference? Laziness. Who cares; same result.

Try all of the methods below. Most require tailing the result file in another shell to see the results, while at the same time different statistics are printed on the screen (the more statistics, the slower the resolution). The first one, called TEE VERSION, doesn't require tailing; just run it, and all of the results can be seen from the same shell.




This version of the script has visual output

S = Start
F = Finish
. = a process was checked that is already listed
+ = a new process was found and appended to the list


Also, the number of processes found in that loop (the total number scanned) is listed after each scan.
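The script body is missing above; here is a minimal sketch of such a visual variant (file name, interval, marker placement, and the testable function wrapper are my reconstruction):

```shell
#!/bin/sh
# Sketch of the visual variant:
#   S = scan start, F = scan finish,
#   . = process already listed, + = new process appended to the list
OUT1=psf.txt

monitor_visual() {
    N=${1:-0}   # optional: number of scan loops (0 = run until Control-C)
    i=0
    SNAP=/tmp/snap.$$
    touch "$OUT1"
    while [ "$N" -eq 0 ] || [ "$i" -lt "$N" ]; do
        i=$((i+1))
        ps -e -o args= > "$SNAP"
        printf 'S'
        while IFS= read -r CMD; do
            if grep -Fxq -- "$CMD" "$OUT1"; then
                printf '.'                       # already listed
            else
                printf '+'                       # new: append it
                printf '%s\n' "$CMD" >> "$OUT1"
            fi
        done < "$SNAP"
        printf 'F %s\n' "$(wc -l < "$SNAP")"     # finish + total scanned
        sleep 0.1
    done
    rm -f "$SNAP"
}
# monitor_visual   # start it; stop with Control-C
```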


NOW VISUAL – fastest resolution (you know once everything has been processed)

When you write less to the screen, each while loop iteration can go through faster.


NOW VISUAL – fastest resolution – blank

When you write less to the screen, each while loop iteration can go through faster.


Below is an older method with which you couldn't follow the output with tail, because it sorted the output, among other limitations.

So the above is by far the best.



First off, useful links. These links do the same thing I am planning to do, but they involve changing configs or installing programs, whereas mine is just a monitoring command run using the system tools already available to us:

Save every session to a hidden script/typescript file:

Modify the way hist control saves:

Save each command to syslog cmdlog:

Same idea:


Command audit program:

Now time for my program. It differs in that you can run it at any time; you don't need to set up an environment for it, install any commands, or change any config files.

The goal of this script is to generate a log of every command run (no timestamps, unfortunately; this is a simple version). All logs will be saved into one file, denoted here by the variable OUT1 (you can change it if you want). Why do I say most commands are recorded, and not all? Because the ps process snapshots occur on a timed basis, and it's possible one is taken at a moment when a program didn't run. Example: a snapshot happens, Program 400 runs and quits, then another snapshot happens, missing the fact that Program 400 ran.

This gives info on every process:
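The command itself was not preserved; a likely form (my assumption) is:

```shell
ps -ef                               # full listing: UID, PID, PPID, start time, command
ps -e -o pid,ppid,user,etime,args    # or pick exactly the columns you want
```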

This just lists the command part (so we don't have to awk the column out):
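Presumably something like this; the trailing "=" after args blanks the column header, so nothing needs to be stripped:

```shell
ps -e -o args=   # full command line of every process, no header
```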

If we want to sort the output (not yet used in the main working model below):
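A sorted variant might look like this (sketch):

```shell
ps -e -o args= | sort -u   # sorted, duplicates collapsed
```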


The rest is simple programming.

We keep appending to a file, always sorting that file, and always running uniq on the output file so that it doesn't grow infinitely big.


Note: you can change where the file saves by changing the OUT1 variable. It currently saves to the file "all-comm.txt" in the current directory.

The output also shows you how big that file is in lines, words, and characters (bytes). Run any of these commands and, when done monitoring, press Control-C. Look through the file with vi, cat, or whatever (grep if you want to).

WITH THE WATCH COMMAND (watch can be used like a while loop to repeat a command or commands):
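The actual watch invocation is not reproduced here; a plausible version (interval, stats, and file name are my guesses) repeats the snapshot/sort/wc step. Since watch runs full-screen until Control-C, the runnable lines below show a single tick of what it repeats:

```shell
OUT1=all-comm.txt
# continuous version (runs full-screen until Control-C):
#   watch -n 0.1 "ps -e -o args= >> $OUT1; sort -u $OUT1 -o $OUT1; wc $OUT1"
# one tick of what watch repeats:
ps -e -o args= >> "$OUT1"
sort -u "$OUT1" -o "$OUT1"
wc "$OUT1"
```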


Look through the results with vi or cat, or grep them.

Example: to see all of the rsync commands that were run on the system during monitoring (note: you can run this while the monitoring is still happening):
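The example command is missing; it would look something like this (-F per the grep note earlier; the rsync log line here is made up purely for the demo — during real monitoring, all-comm.txt is produced by the script):

```shell
# demo input (stands in for a real monitoring log)
printf '%s\n' 'rsync -av --delete /data/ backup:/data/' '/bin/ls' > all-comm.txt

grep -F "rsync" all-comm.txt   # -F: treat the pattern as a fixed string
```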

Update: the best way to monitor with the above commands

Since we only have a tenth-of-a-second window to catch all of the programs, let's lower that, and let's also not print extra info to the screen that we don't need, as that takes extra time. Let's also use the simpler construct: not watch, but a while loop. The simpler, the faster; and the better the resolution, the more accurate the results.
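The improved loop itself is missing; a sketch of what it describes (file name and the testable function wrapper are mine):

```shell
#!/bin/sh
# Tightest loop: no sleep and no extra screen output, so each cycle is
# spent entirely on snapshotting and deduplicating.
OUT1=all-comm.txt

monitor_fast() {
    N=${1:-0}   # optional: number of snapshot loops (0 = run until Control-C)
    i=0
    touch "$OUT1"
    while [ "$N" -eq 0 ] || [ "$i" -lt "$N" ]; do
        i=$((i+1))
        ps -e -o args= >> "$OUT1"
        sort -u "$OUT1" -o "$OUT1"
    done
}
# monitor_fast   # start it; Control-C when the operation you are recording is done
```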



More to come… (UPDATE: it just came; read the PFS section above)

Make the output followable:

The problem with the above script is that to remove duplicates one must sort, so you can't follow the output anymore:

Sorting ps output

In Linux and SysV5:

In Linux:
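The commands themselves are missing; plausible reconstructions (my assumption is that the Linux-only variant used procps's --sort option):

```shell
# portable (Linux and SysV5-style systems): sort the output yourself
ps -e -o args= | sort
# Linux (procps) can sort for you
ps -e -o args= --sort=args
```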


What I need:

Perhaps something like this (currently this is weird):
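The snippet was not preserved; a sketch of that attempt (names, interval, and the testable function wrapper are mine) pipes continuous snapshots through an awk filter that only passes lines it hasn't seen yet, so the output file stays append-only and followable:

```shell
#!/bin/sh
# Followable monitor: duplicates are filtered WITHOUT sorting, so new
# commands always land at the bottom of psf.txt and tail -f works.
OUT1=psf.txt

monitor_follow() {
    N=${1:-0}   # optional: number of snapshots (0 = run until killed)
    i=0
    while [ "$N" -eq 0 ] || [ "$i" -lt "$N" ]; do
        i=$((i+1))
        ps -e -o args=
        sleep 0.5
    done | awk '!x[$0]++ { print; fflush() }' >> "$OUT1"
    # fflush() pushes each new line out immediately so tail -f can see it
}
# monitor_follow &   # start it in the background; follow with: tail -f psf.txt
```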

In another shell, follow the output file with tail -f:

 Deduplicate scripts thanks to:

Using this one, awk '!x[$0]++', by Michael Hoffman.
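A quick demonstration of how that one-liner deduplicates while preserving order (which is exactly what makes followable output possible; the sample input is mine):

```shell
# x[$0]++ is 0 (false) the first time a line is seen, so "!" makes the
# pattern true exactly once per distinct line; first occurrences pass
# through in their original order.
printf '%s\n' b a b c a | awk '!x[$0]++'
# prints: b a c (one per line)
```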

Below are excerpts from the link; I might use them to make the above scripts better:

Maybe I can use different deduplication scripts:
Michael Hoffman's solution above is short and sweet. For larger files, a Schwartzian transform approach (adding an index field using awk, followed by multiple rounds of sort and uniq) involves less memory overhead. The following snippet works in bash:
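The snippet from the comment is not reproduced above; here is a reconstruction of the approach it describes (the demo input is mine): tag each line with an index, sort by content, deduplicate, then restore the original order.

```shell
TAB=$(printf '\t')
printf '%s\n' b a b c a |
  awk '{ print NR "\t" $0 }' |   # 1. tag each line with its original position
  sort -t "$TAB" -k2 -s |        # 2. sort by content (stable, so the first copy stays first)
  uniq -f1 |                     # 3. drop duplicates, ignoring the index field
  sort -n |                      # 4. restore the original order
  cut -f2-                       # 5. strip the index
# prints: b a c (one per line), same as awk '!x[$0]++'
```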

Thanks 1_CR! I needed a "uniq -u" (remove duplicates entirely) rather than uniq (leave one copy of duplicates). The awk and perl solutions can't really be modified to do this; yours can! I may have also needed the lower memory use, since I will be uniq'ing something like 100,000,000 lines 8-). Just in case anyone else needs it, I just put a "-u" in the uniq portion of the command:

