10May/22

List All Directories with Unextracted Rar files

When downloading from the internet (for example from torrents), you might end up with a lot of directories containing rar files. Some sources archive most or all of their content in single or multiple rar files. Usually you extract those files immediately after download, but sometimes you can get lost in the sheer number of directories and lose track of which folders/directories have rar files that were never extracted.

Using this bash command (works in linux/cygwin/wsl/unix/mac) you can find out which of your subdirectories have unextracted rars.

First create this script, ListUnextractedRARs.sh. I just keep it in my downloads directory, as that's the place I usually run it from. However, you can put it somewhere in your $PATH so that you can call it easily from any path; this works because the script looks for unextracted rars in the current working directory. Make sure its permissions mark it as executable (chmod +x ListUnextractedRARs.sh).
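
The original script isn't reproduced here, so this is a minimal sketch of one way to do it, assuming unrar is installed; it uses a simple heuristic (does the first file inside the archive already exist next to the rar?):

#!/bin/bash
# ListUnextractedRARs.sh - a minimal sketch; for every rar under the current
# directory, list its contents with unrar and check whether the first archived
# file already exists next to the archive. If not, report the directory.
# (Multi-part and password-protected archives are not handled here.)
find . -type f -iname '*.rar' | while IFS= read -r rar; do
    dir=$(dirname "$rar")
    first=$(unrar lb "$rar" 2>/dev/null | head -n 1)
    [ -z "$first" ] && continue            # unreadable or empty archive
    if [ ! -e "$dir/$first" ]; then
        echo "$dir"                        # rar present but not extracted
    fi
done | sort -u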

Now run that script
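
A sample run might look something like this (the AwesomeTools directory is just this post's example; your output will differ):

cd ~/Downloads
./ListUnextractedRARs.sh
./AwesomeTools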

So from this output, we know that AwesomeTools has one or more unextracted rar files.

24Mar/22

Bypassing ssh Permissions Too Open Warning

This trick works for Linux servers and also if you are using WSL2 on Windows to run Ubuntu (or whatever *nix OS). If you are trying to use ssh keys with your ssh operations (rsync, ssh, scp, etc.), you might come across the issue of the ssh key having permissions that are too open. Error message: WARNING: UNPROTECTED PRIVATE KEY FILE!. So you might try to chmod the key to the correct permissions, for example chmod 600 <yourkey>, only to notice that it didn't fix the issue because the permissions remained the same. You can't change the permissions if the key is sitting on a Windows partition (at least at the time of writing this, I couldn't figure out how to change them). One option is to copy the keys to your Linux partition, for example to your ~ directory. However, what if you don't want multiple copies of your keys lying about?

The problem with ssh is that there is no way to bypass this warning; it's hardcoded. In this serverfault link you can see the actual part of the code. That link provides several workarounds. One is to rewrite that bit of the code so the check is ignored and recompile, but that's not a convenient option. Another option is to sudo into the nobody user and run ssh as it. This tricks ssh into ignoring the too-open permissions.

So let's say you come across this error:
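
The OpenSSH warning looks roughly like this (the path and permission bits below are just illustrative):

ssh -i /mnt/d/path/to/your/key/idrsa.openssh user@yourserver.com

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0777 for '/mnt/d/path/to/your/key/idrsa.openssh' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.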

Per that serverfault page, you can trick the system into ignoring the permissions by running ssh through the nobody user:

sudo -u nobody ssh -i <path to identity file> <ssh server>

With this fix you can connect, but it will complain about the authenticity of the host. You will need to type “yes” and hit Enter to get in, but you will get in:

sudo -u nobody ssh -i /mnt/d/path/to/your/key/idrsa.openssh user@yourserver.com

Could not create directory ‘/nonexistent/.ssh’.
The authenticity of host ‘user@yourserver.com (1.2.3.4)’ can’t be established.
ECDSA key fingerprint is SHA256:QvEcr3Rm+Y8cD5sdfgsdfgsdfgjUuO+VFgUYMPlzBmAATg.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
yourserver:~# 

So now you can add the trick from this article, which I wrote a couple of years back: http://www.infotinks.com/ignoring-ssh-authenticity-of-host

So with the final command, you can bypass having to type “yes” and Enter.

sudo -u nobody ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i /mnt/d/path/to/your/key/idrsa.openssh user@yourserver.com

You will get right in without any prompts, but you will see some warnings you can rightfully ignore:

Could not create directory ‘/nonexistent/.ssh’.
Warning: Permanently added ‘www.infotinks.com,168.235.98.183’ (ECDSA) to the list of known hosts.
Linux www.infotinks.com 3.16.0-1160.21.1.vz7.174.13 #1 SMP Thu Apr 22 16:18:59 MSK 2021 x86_64
yourserver:~# 

Because this uses sudo, it will ask for your user's password (luckily not every time), so you will need other tricks to bypass that; one possibility is sketched below. If you are root it will not ask. If you know of other tricks, leave them in the comments.
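
One such trick (my own suggestion, not from the serverfault page) is a NOPASSWD sudoers rule scoped to running ssh as nobody. Add it with visudo and adjust the username and ssh path for your system:

# /etc/sudoers.d/nobody-ssh  (edit with: visudo -f /etc/sudoers.d/nobody-ssh)
# <youruser> is a placeholder for your actual username
<youruser> ALL=(nobody) NOPASSWD: /usr/bin/ssh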

20Mar/22

Parallelizing nmap scan with xargs. Get list of IPs with Angry IP

First we get all of the known hosts with Angry IP scanner. Then we launch a parallel nmap scan.

Get a list of possibly alive IPs with angry ip

Download angry IP: https://angryip.org/download

Install and open the app.

Change fetchers to have the following selected: Ping, Hostname, Ports, MAC Vendor, MAC Address, Web detect, TTL, NetBIOS Info.

Click the GEAR icon to get to the Preferences, where you can change which ports to scan. Set the ports to scan to: 20-23,24,25,80,111,443,3389,2049,4420-4422,8080

Click OK

Click on Tools Menu -> Selection -> Alive

Scroll through the list; if any are missing, add them to the selection by holding Control and clicking on the entry

Scan -> Export Selection -> Set filetype to XML -> save file to new directory

Save the file to an empty directory, as we will be working here and generating more files

Now let's run parallel nmaps against the results

Open cygwin or bash and cd into that directory

In the simplest case we could do the following. This will create an nmap-output directory and save all of the nmap results there. It will launch 5 nmap processes at a time until completion. If there are fewer than 5 IPs or nmaps to run, that's okay, it will only launch that many.
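
A minimal version might look like this (my reconstruction from the pieces explained below):

mkdir nmap-output
cat *xml | grep "host add" | cut -f2 -d'"' | sort | uniq | xargs -I{} -P5 bash -c 'nmap -vA {} &> nmap-output/nmap-vA-{}.out' &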

However, we want to print date start and end:
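
Putting the date logging and the pipeline together gives a long one-liner roughly like this (reconstructed from the pieces broken down below):

mkdir nmap-output
cat *xml | grep "host add" | cut -f2 -d'"' | sort | uniq | xargs -I{} -P5 bash -c '(CMD="nmap -vA {}"; echo "DATE START: $(date) $(date +%s) :$CMD"; eval "$CMD"; echo "DATE END: $(date) $(date +%s)") &> nmap-output/nmap-vA-{}.out' &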

Let's break down the different parts, so you can understand this long command:

mkdir nmap-output # creates the output directory nmap-output where we will dump all of the nmap results to

cat *xml | grep "host add" | cut -f2 -d'"' | sort | uniq # extracts ip addresses

We then send the IP addresses to xargs. Each IP address becomes {}, so we can use {} as the “variable” representing the IP. Here we launch 5 parallel processes. If one stops or finishes, the next one starts. If all 5 end at the same time, then we spawn 5 new ones to continue. This goes on until everything completes. You can change that 5 to another number.

xargs -I{} -P5 bash -c 'nmap -vA {} &> nmap-output/nmap-vA-{}.out' &

NOTE: the output redirection is specified within the bash command and contains the IP address {}; that way each process logs to a different file.
Each process would log to the same file if it were run like this:


xargs -I{} -P5 bash -c 'nmap -vA {}' &> nmap-output/nmap-vA-{}.out

TIP: if you want to run multiple commands or have more complicated bash commands with xargs, then kick off a bash -c and put your commands in single quotes. If you need outer-shell variables, bring them in by breaking out of the single quotes like this:


xargs -I{} bash -c 'command '"$variable"' other commands or options'

In my case I wanted to log the start and end time of the nmap, so I used a more complicated bash string in my xargs. It's a combo of multiple commands. Without wrapping it in bash -c it wouldn't have been possible:
(CMD="nmap -vA {}"; echo "DATE START: $(date) $(date +%s) :$CMD"; eval "$CMD"; echo "DATE END: $(date) $(date +%s)") &> nmap-output/nmap-vA-{}.out

Monitor the output with:
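
The original monitoring command isn't shown, but something along these lines works (watch, du and pgrep here are my own assumptions):

watch -n 5 "du -sh nmap-output/*; echo; pgrep -fc 'nmap -vA'"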

You will see the files grow and also you will see if the job is still running.

Now it's up to you what you want to do with the results.

14Mar/22

Testing Fake 2TB thumb drive

I ordered a 2TB thumb drive off Amazon to use for backups. It was only $30, so I gave it a shot. I should have known, as SanDisks are in the hundred-or-more dollar range.

The drive: https://www.amazon.com/gp/product/B09TPDPDW8/ref=ppx_yo_dt_b_asin_title_o03_s00?ie=UTF8&psc=1

Anyhow, it was acting suspicious right off the bat. It came formatted with NTFS and I tried to immediately put ext on it. I loaded it on my NAS but it couldn't mount it (that could just be my NAS not liking NTFS). So I overwrote it with ext4 and it wouldn't mount.

I even tried to partition it with GPT and MBR. I was able to write ext4 to it, however it couldn't mount it. I would get this error:

[85359.229884] EXT4-fs (sdg1): error loading journal

So I went online to look for tools that check real USB drive sizes and came across a tool called f3.

f3: https://fight-flash-fraud.readthedocs.io/en/latest/introduction.html

It comes with several tools; the two important ones here are f3write and f3probe. f3write is for testing drives that can mount; it writes to a mount point. f3probe is for drives that can't mount; it writes directly to a device.

This was all done on my ReadyNAS 6.10 (Intel version), which uses a Debian-based Linux system. Not sure if this will work on an ARM version, but it's worth a try if you have one. When I apt installed f3, it was missing f3probe, which is what I need. So I looked into getting a docker setup; that way I can install it without any problems, as it will all be containerized.

Step 1. Install docker on my Readynas. There is a free app that you can install on the Readynas called “Docker CE CLI”. Install it and then you have access to the “docker” commands.

Step 2. Identify what the device file for your drive is.

lsblk

cat /proc/partitions

I see that my drive is /dev/sdg. We know it's the right device because it's the last listed device. You can also run dmesg and see which device the USB drive loaded as:

dmesg | grep -A10 usb

Look for the messages that refer to the USB drive loading up and then the drive device coming up:

[70327.204021] usb 2-3: new high-speed USB device number 3 using ehci-pci
[70327.319309] usb-storage 2-3:1.0: USB Mass Storage device detected
[70327.319561] scsi host7: usb-storage 2-3:1.0
[70328.320665] scsi 7:0:0:0: Direct-Access General UDisk 5.00 PQ: 0 ANSI: 2
[70328.321672] sd 7:0:0:0: Attached scsi generic sg6 type 0
[70328.322248] sd 7:0:0:0: [sdg] 4096000000 512-byte logical blocks: (2.10 TB/1.91 TiB)
[70328.322999] sd 7:0:0:0: [sdg] Write Protect is off
[70328.323014] sd 7:0:0:0: [sdg] Mode Sense: 0b 00 00 08
[70328.323640] sd 7:0:0:0: [sdg] No Caching mode page found
[70328.323644] sd 7:0:0:0: [sdg] Assuming drive cache: write through
[70328.329546] sdg: sdg1
[70328.332884] sd 7:0:0:0: [sdg] Attached SCSI removable disk
[80103.309662] sdg: sdg1

Step 3.

Launch into the f3 container. If you have never downloaded this container, it will first download it <- that's one of the beautiful aspects of docker. Note we also tell it that we want access to the /dev/sdg device.

docker run -it --rm --device /dev/sdg peron/f3 bash

Step 4.

Now you are in docker, check that you still have the same partitions.
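
For example (the same command as before; /proc/partitions works even in a minimal container):

cat /proc/partitions    # sdg should still show up here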

I was able to see all of my partitions (so perhaps --device /dev/sdg was not necessary), so we have to be careful with the next command. Do not accidentally typo and put in one of your RAID devices.

f3probe --destructive --time-ops /dev/sdg

Wait this out and explore the results. For me it took around 15 minutes:

root@92c49346f033:/f3# f3probe --destructive --time-ops /dev/sdg
F3 probe 7.2
Copyright (C) 2010 Digirati Internet LTDA.
This is free software; see the source for copying conditions.

WARNING: Probing normally takes from a few seconds to 15 minutes, but
it can take longer. Please be patient.

Bad news: The device `/dev/sdg' is a counterfeit of type limbo

You can "fix" this device using the following command:
f3fix --last-sec=104972831 /dev/sdg

Device geometry:
Usable size: 50.05 GB (104972832 blocks)
Announced size: 1.91 TB (4096000000 blocks)
Module: 2.00 TB (2^41 Bytes)
Approximate cache size: 1.00 MB (2048 blocks), need-reset=no
Physical block size: 512.00 Byte (2^9 Bytes)

Probe time: 5'25"
Operation: total time / count = avg time
Read: 263.5ms / 4212 = 62us
Write: 5'24" / 53412 = 6.0ms
Reset: 0us / 1 = 0us

Step 5. Draw your conclusion and exit

In our case we see this drive is clearly only 50 GB and not the reported 2 TB.

If you want to fix the drive so that it reports its correct size of 50 GB instead of the fake 2 TB, run the command:

f3fix --last-sec=104972831 /dev/sdg

root@98caf6f985b6:/f3# f3fix --last-sec=104972831 /dev/sdg
F3 fix 7.2
Copyright (C) 2010 Digirati Internet LTDA.
This is free software; see the source for copying conditions.

Error: Error informing the kernel about modifications to partition /dev/sdg1 -- Permission denied. This means Linux won't know about any changes you made to /dev/sdg1 until you reboot -- so you shouldn't mount it or use it in any way before rebooting.
Error: Partition(s) 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64 on /dev/sdg have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.
Drive `/dev/sdg' was successfully fixed

According to the output, we need to reboot the system (or take the flash drive out and use it anywhere else) to see the new correct size. Also, we can't forget to repartition and reformat with a correct filesystem; a sketch of that is below.
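
A rough sketch of that repartition + reformat, once the drive has been re-plugged (the device name is whatever it shows up as on your system; double check it before running):

parted /dev/sdg --script mklabel gpt mkpart primary ext4 0% 100%
mkfs.ext4 /dev/sdg1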

Now we can exit.

exit

docker ps -a

The above command shows that the f3 container is gone, because we used the --rm option. If we hadn't, the exited container would still be listed there and we would have to clean it up with docker rm <container id>

24Feb/22

How To Regex With Grep And Solve Wordle Puzzles

Wordle is an interesting 5 letter word puzzle game. To learn how to play it just watch the first video link below.

To cheat through it with regex, first get the allowed_words file, then grep through it.

First, get the allowed_words file.
Youtuber 3b1b has created a python script analyzing the best words to start with.
In his first iteration the best starting word for the game is “crane”, but his second video updates that to another word, “salet”.

First video: https://www.youtube.com/watch?v=v68zYyaEmEA
Second video: https://www.youtube.com/watch?v=fRed0Xmc2Wg

Prepare

Anyhow, download his latest wordle git repo to get the allowed words file

Or just download the allowed_words file like this
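
For example with wget (I believe the file lives in 3b1b's videos repo; the exact path below is an assumption, so adjust it if it has moved):

wget https://raw.githubusercontent.com/3b1b/videos/master/_2022/wordle/data/allowed_words.txt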

Note: there is a file called possible_words that has all of the English 5-letter words. When Wordle was created, they didn't use all of the words. Fun fact: the creator of the game had his girlfriend or wife select which words to use.

Example

So let us get started. I can show you how to do everything you need to learn with one step of the game / one example.

Let us try the word “crane”. And we end up getting:
Green, Grey, Grey, Yellow, Green

We know that green means it’s the right letter and in the right spot.
Yellow means that a letter exists in the word but it is in the wrong spot.
Grey means that letter doesn’t exist in the word.

So let us take care of the three cases: green, yellow, and grey.

case 1. We get green on “c” and “e”. So we know there are a “c” and an “e”, and they are in the right spots.
case 2. We get yellow on “n”. So we know there is an “n”, but it is not in the 4th slot.
case 3. We get greys on “r” and “a”. So we know there is no “r” or “a”.

Note you will see that there are many ways to achieve the same result with grep.

Sidenote: grep is not the only alternative for this. Awk can be used too. Sed as well. For simplicity, we will use standard grep.

case 1:

We know that a “c” and an “e” are in the first and last spot respectively.
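The grep for this (it is the same one that shows up in the combined command later):

cat allowed_words.txt | grep "c...e"
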
The dot character means it can be any character. We also know that those 3 dots can't be certain letters; we will take care of that with case 2 and case 3, and when we pipe the greps together, we will get our desired end result.

case 2:

We know that we have an “n”, but it can't be in the fourth slot.
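
One way to express that (the original command isn't shown, so this is my reconstruction using an anchored negative match):

cat allowed_words.txt | grep -v "^...n"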

or

cat allowed_words.txt | grep "...[^n]."

But we are not done, that only takes care of an “n” not being the 4th slot. However, now we must make sure there is an “n” in there. To do that we pipe another grep
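
Continuing with the reconstructed variant from above:

cat allowed_words.txt | grep -v "^...n" | grep n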

or

cat allowed_words.txt | grep "...[^n]." | grep n

Now it's looking for an “n” that is not in the 4th slot.

case 3:

We know that we can’t have an “r” or an “a” anywhere. So we can take care of that with this grep
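
For example with one grep -v per letter:

cat allowed_words.txt | grep -v a | grep -v r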

or combine them like this (note the order of a or r doesn’t matter)

cat allowed_words.txt | grep -v "[ar]"

combining the cases:

To combine the cases we can pipe them together:

cat allowed_words.txt | grep "c...e" | grep "...[^n]." | grep n | grep -v "[ar]"

The first and second grep can actually be meshed together, giving an end result like this:

cat allowed_words.txt | grep "c..[^n]e" | grep n | grep -v "[ar]"

Here we can see the output of both commands is the same:

> cat allowed_words.txt | grep "c..[^n]e" | grep n | grep -v "[ar]"

cense conge conte

> cat allowed_words.txt | grep "c...e" | grep "...[^n]." | grep n | grep -v "[ar]"

cense conge conte

So next we know we need to try cense, conge or conte.

How do we apply multiple steps?

From the above set of guesses and their results, I would construct the following:

> cat WORDLE_ALLOWED.txt | grep '[^e][^et][^h]i[^e]' | grep '[eth]' | grep -v "[wavolmc]"

The 1st grep takes care of the yellows (it's the right letter but it's in the wrong spot). Also, some spots may have multiple letters that cannot be there: the 2nd spot, for example, can't be an “e” and can't be a “t”.

Also the 1st grep takes care of the green, with the i.

The 2nd grep repeats the letters from the 1st grep, specifically the ones whose positions we don't know. We could include the letter we do know, “i”, but it wouldn't change the answer. We know that there should be an “e”, “t” and “h” somewhere.

The 3rd grep takes care of the letters that can't be there, i.e. all of the grey letters. Note that we didn't include the grey “e”. This is because WEAVE had 2 e's, and the first e was yellow, so that means there is only 1 e in the word. Some words might have 2 e's or more; in that case, more of them would be yellow or green.

28May/21

Python creating your own range() function with yield

Generator functions allow for yielding, which is an important concept to understand as you become more advanced with Python. A generator basically lets you produce the items of a list without building the whole list, caring only about the next value. This saves time (and memory) in many situations.

Anyhow, these are just my own practice notes on the yield keyword. I don't have any clever explanations for it, besides this: when the code executes yield, it leaves the function and returns to where it was called from (in the example, the list() call); then when execution goes back in (in the example, list() tries to exhaust the generator), the function remembers which line it was on and continues from there (it keeps looping). As an exercise, you can add print() calls to see how it proceeds.
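
Here is a minimal sketch of the kind of generator function these notes are about (the function name and the exact numbers are my own; the original code isn't reproduced here):

def my_range(start, stop, step=1):
    """A tiny range()-like generator."""
    current = start
    while current < stop:
        yield current      # hand the value to the caller and pause here
        current += step    # on the next request, resume from this line

print(list(my_range(6, 9)))    # [6, 7, 8] - list() exhausts the generator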

Also, the list() call is not the best way to show off yield, as it simply prints everything the generator can produce. The same result could have been achieved with non-yielding code that appends numbers to a list and returns the list.

Instead, we can really show the power of these generator functions using the next() method.
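
A sketch of that, reusing the my_range() from above (again, the names are my own):

new_range = my_range(6, 9)   # a generator object; nothing computed yet
print(next(new_range))       # 6
print(next(new_range))       # 7
print(next(new_range))       # 8
# next(new_range) again would raise StopIteration - the generator is exhausted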

Note that we define the new_range variable, which becomes a generator object. If we were to call list() on this object we would get [6, 7, 8] (not shown here, as a similar concept is shown in the first code output). However, if we call next(new_range), we get the next value of the generator; the key concept is that if it hasn't been called yet, it provides the first value. So the first next() outputs the first number, which is 6. Then 7. Then 8. If we call it again, it raises an error (StopIteration), as we have reached the end of the generator.

Benefits: Imagine the range was r1(1,100000000) and we needed the 1000th item. We could generate the list() of it and then get the 1000th item, but that would take a while. Instead we can just yield through until the 1000th item without caring about the rest. Speed 🙂

That's all folks.

16Mar/21

WebEx – Simple Python Script To Find Room By Name and Send Message To It

This is a simple python script which uses the WebEx API. WebEx is a chat system created by Cisco. However, it's a lot more than just that; it's a full-on collaboration tool that can be used like Slack and Zoom at the same time. So it is more than just chatting; you can share screens as well.

What we do with this example is simply use the API to write a message to a room (specifically a small space). First we have the API get the room list, find the room by its name, get that room's ID, and then send a message to that room ID.

Sidenote: If you have the room ID already you can shortcut a lot of the code

Sidenote: A room is any space (many people) or one on one conversation that you are having.

Getting API Key For Yourself for 12 hours:

More info on the API & developing with it is found here:

https://developer.webex.com/docs/api/getting-started

In fact, you need to go to that site to get the API Key. First log in at the top left corner. Then browse the docs; a good starting point is API Reference -> Rooms. You should find the ability to copy your API Key to the clipboard directly from the page (make sure your browser window is fully expanded on the screen).

This will get a 12 hour API Key.

Getting A More Permanent API Key with a Bot:

To get a more permanent key (100 years) you have to create a WebEx API Bot / app (create bot here | more info on bots here). You give the bot a name, a unique username, an icon (just like a user), and a description. The bot then automatically gets a domain name tagged to it with the domain webex.bot, like an email username; so for bot username bot1, the full email username will be bot1@webex.bot. You can use this username to add your bot to your room. After submitting, you will get a Bot ID and a Bot Token (you use the Bot Token as your API Key; it's good for 100 years).

Sidenote: you can create bots, integrations, and guest issuers, and also login as yourself for 12 hours (explained above).

The Steps

  1. First open WebEx
  2. Create a room or space called “MyRoom-Test” (or whatever you want to call it)
  3. Make sure the user whose API key is being used has joined the room.
    • If you created an APP or Bot, make sure to join it to the room.
  4. Next, make sure you fulfill your python requirements, create the script and call the script with the correct environment variables in place.

The python script uses the requests package, which needs to be installed. Requests is an HTTP(S) library for python; it allows for easy manipulation of said protocol and therefore makes it easy to use REST APIs (which WebEx relies on).

I will not go in depth here on the code or the REST API as there are plenty of articles online, and this is mostly for my notes.

Sidenote: One thing I didn't do in this code is error handling.

Other Requirements:

  • Python 3.8 or newer, as I use the f-string format {var=} which was introduced in 3.8
  • requests python package which can be installed with: pip install requests

Script (Find Room and Send Message):

Note: you can use either the api.ciscospark.com or the webexapis.com endpoint when calling the API; both will achieve the same results.
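
The original script isn't reproduced here, so below is a minimal sketch of what it does. The endpoints, the WBX_KEY environment variable, the "MyRoom-Test" title and the "test message" text come from this post; the variable names and overall structure are my own (and, as noted above, there is no error handling):

#!/usr/bin/env python3
import os
import requests

WBX_KEY = os.environ["WBX_KEY"]                   # your API key / bot token
BASE = "https://webexapis.com/v1"                 # api.ciscospark.com/v1 also works
HEADERS = {"Authorization": f"Bearer {WBX_KEY}"}
ROOM_TITLE = "MyRoom-Test"

# 1. get the room list and find the room by its title
rooms = requests.get(f"{BASE}/rooms", headers=HEADERS).json()["items"]
room_id = None
for room in rooms:
    if room["title"] == ROOM_TITLE:
        room_id = room["id"]
        print(f"{room_id=}")                      # 3.8+ f-string debug format
        break

# 2. send a message to that room id
resp = requests.post(f"{BASE}/messages",
                     headers=HEADERS,
                     json={"roomId": room_id, "text": "test message"})
print(f"{resp.status_code=}")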

How To Run It:

First set your WBX_KEY API Key environment variable. Replace PasteLongApiKeyHere with your actual key:
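
For example (the script filename here is just what I chose for the sketch above):

export WBX_KEY="PasteLongApiKeyHere"
python3 webex_find_room_send.py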

Example Output:

Personal information has been edited for security reasons.

Also, you will see your “test message” in the actual WebEx application. If you used your API key it will come from you. If you used your Bots API key it will come from your bot.

The Script Assuming You Know The RoomID:

If you know the roomID you can obviously make the code a lot smaller, as we don't have to find the roomID by title.
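
A sketch of the shorter variant (the WBX_ROOM_ID environment variable name is my own choice):

#!/usr/bin/env python3
import os
import requests

WBX_KEY = os.environ["WBX_KEY"]        # API key / bot token
ROOM_ID = os.environ["WBX_ROOM_ID"]    # the long room ID you already know
HEADERS = {"Authorization": f"Bearer {WBX_KEY}"}

resp = requests.post("https://webexapis.com/v1/messages",
                     headers=HEADERS,
                     json={"roomId": ROOM_ID, "text": "test message"})
print(f"{resp.status_code=}")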

Then you would call this code like this:

First set your WBX_KEY api key environment variable by replacing PasteLongApiKeyHere with your actual key. Do the same but with the long Room ID for PasteLongRoomIDHere.
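
For example:

export WBX_KEY="PasteLongApiKeyHere"
export WBX_ROOM_ID="PasteLongRoomIDHere"
python3 webex_send_to_roomid.py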

The output will be similar to the previous section but shorter, as we don't have to find the room, and you will get a text message in your WebEx as expected.

The end,

11Mar/21

Trick to recall “ln -s” symlink argument order

I can never remember the order of arguments for the ln command (link command).

The easiest way to think about it is to relate it to the “cp” (copy) command: instead of “cp source destination”, think of it as “cp existing new”. With symlinks, it is hard to comprehend what is the source and what is the destination. Switching your thinking to what already exists and what will be new helps. So the argument order is just like it is with cp: existing, then new. “ln -s existing new” does the trick.

Rule of thumb:
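
In other words (the file names below are just placeholders):

cp    existing-file  new-copy       # copy: existing first, new second
ln -s existing-file  new-symlink    # symlink: same order, existing first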

Also, apparently I am not the only one that has trouble with this, as there is this link: https://news.ycombinator.com/item?id=1984456 . I stole the trick from the top comment and made an article; now I am bound to not forget (I hope).

The end

04Mar/21

Sublime Text Editor – Show All Whitespace

The ability to see all whitespace (spaces, tabs, etc) is not very clear in sublime and not easily accessible.

You can change it by doing this:

Open your user settings (Preferences -> Settings) and add this item into your settings, between the { and }.
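
The item in question is the draw_white_space setting; “all” shows whitespace everywhere:

"draw_white_space": "all",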

Make sure to properly end it with a comma if it's not the last item in the dict; if it is the last item in the dict, remove the final comma character.

However, you can easily set yourself some keyboard shortcuts by editing the keymap (Preferences -> Key Bindings). Note: put these in between your [ and ] characters (the key bindings file is a list of dicts with the keys “keys”, “command” and “args”).
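
Something along these lines (set_setting is a built-in Sublime command that changes a view setting; reverting to "selection" matches Sublime's default, which is an assumption on my part):

{ "keys": ["shift+alt+d"], "command": "set_setting", "args": { "setting": "draw_white_space", "value": "all" } },
{ "keys": ["shift+alt+a"], "command": "set_setting", "args": { "setting": "draw_white_space", "value": "selection" } },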

Likewise, make sure to remove the comma at the end if it's the last item, and keep it if you have further items in there.

Shift+Alt+D to show all whitespace

Shift+Alt+A to revert to normal

On a MAC replace Alt with Option key

Sidenote: My key bindings so far are simple, so it's just the two entries shown above.

11Dec/20

rhood – robinhood portfolio analysis tool (better net profits per symbol)

Github: https://github.com/bhbmaster/rhood <- download location. install & run instructions.

Using the robin-stocks python module, I created my own Robinhood portfolio analyzer called rhood. It pulls all of your Robinhood account information from the API and outputs a single text report containing your portfolio information: orders + open positions + dividend information. Mainly, it parses all of your orders and outputs sorted orders, open positions, informative profits, and dividend information. It gives a good figure for your total net gain, and the net gain for any position currently or previously owned (currently the default Robinhood app doesn't show this information nicely).

It requires python, the robin-stocks pip package (Robinhood API client), and the pyotp pip package (2-factor authentication module). It can be run on Windows, MAC, or Linux; a rough setup sketch is below.
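
A rough setup sketch (see the github README for the authoritative install & run instructions; the script name here is my assumption):

pip install robin-stocks pyotp
git clone https://github.com/bhbmaster/rhood
cd rhood
python3 rhood.py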

This prints a lot of information about your stocks, crypto, and options (see note 1):

  • all of the orders (creates csvs from them as well)
  • all open positions
  • net profit calculations

It provides useful information that I couldn't find in the Robinhood app itself, i.e. your profit per stock. Robinhood has a section that shows total return, however that seems to clear out if you sell out of a stock entirely. My application doesn't do that; it shows you total profit (or loss) for each symbol: stock, crypto, option (see note 1).

* Note 1 – Work in progress: options are not implemented yet. So if you are only using stocks and/or crypto you are set; otherwise options are skipped/ignored.

* Note 2: I used the robin-stocks module. However, I see there are some other modules that talk to the Robinhood API as well. I didn't use these, as they seem to be older: https://github.com/robinhood-unofficial/pyrh and https://github.com/mstrum/robinhood-python