A collection of awesome lists, manuals, blogs, hacks, one-liners and tools for Awesome Ninja Admins.



Created by trimstray and contributors


Who are Ninja Admins?

  • a race of pure evil who rule the network through a monarchist feudal system
  • they never open the door for strangers (or anyone at all)
  • they know very nasty pieces of code, like fork bombs
  • they can make sure dd is not a destroyer of disks
  • they know that #!/usr/bin/env bash is superior to #!/bin/bash
  • they know that su - logs in completely as root
  • they miss and cry for Slackware in production
  • they love the old admin *nix world


Ninja Admins Collection

CLI Tools

▪️ Shells

  🔸 Oh My ZSH! - the best framework for managing your Zsh configuration.
  🔸 bash-it - framework for using, developing and maintaining shell scripts and custom commands for your daily work.
  🔸 Oh My Fish - the Fishshell framework.
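
  A quick sketch of switching your login shell before adopting one of these frameworks (paths are examples; adjust to your system):

  cat /etc/shells          # list shells available on this system
  echo "$SHELL"            # show your current login shell
  chsh -s /bin/zsh         # change the default login shell for the current user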

▪️ Managers

  🔸 Midnight Commander - visual file manager, licensed under GNU General Public License.
  🔸 screen - full-screen window manager that multiplexes a physical terminal.
  🔸 tmux - terminal multiplexer, lets you switch easily between several programs in one terminal.
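
  A minimal tmux workflow for illustration (the session name "work" is just an example):

  tmux new -s work         # start a new named session
  # press Ctrl-b d to detach from the running session
  tmux ls                  # list sessions
  tmux attach -t work      # re-attach later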

▪️ Network

  🔸 Curl - command line tool and library for transferring data with URLs.
  🔸 HTTPie - a user-friendly HTTP client.
  🔸 gnutls-cli - client program to set up a TLS connection to some other computer.
  🔸 nmap - free and open source (license) utility for network discovery and security auditing.
  🔸 hping - command-line oriented TCP/IP packet assembler/analyzer.
  🔸 masscan - the fastest Internet port scanner, spews SYN packets asynchronously.
  🔸 netcat - networking utility which reads and writes data across network connections, using the TCP/IP protocol.
  🔸 tcpdump - powerful command-line packet analyzer.
  🔸 tshark - dump and analyze network traffic (wireshark cli).
  🔸 fierce - a DNS reconnaissance tool for locating non-contiguous IP space.
  🔸 sublist3r - fast subdomains enumeration tool for penetration testers.
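
  A few illustrative invocations of the tools above (hosts, ranges and ports are placeholders):

  nmap -sV -p 1-1000 <host>                  # service/version detection on the first 1000 ports
  masscan -p80,443 <ip_range> --rate 1000    # fast asynchronous SYN scan of selected ports
  http GET https://example.com               # HTTPie request with formatted output
  tcpdump -ni eth0 port 53                   # capture DNS traffic on eth0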

▪️ SSL

  🔸 sslyze - fast and powerful SSL/TLS server scanning library.
  🔸 sslscan - tests SSL/TLS enabled services to discover supported cipher suites.
  🔸 testssl.sh - testing TLS/SSL encryption anywhere on any port.
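
  Example scans with the tools above (the domain is a placeholder; exact flags vary by version):

  sslscan example.com
  testssl.sh example.com
  sslyze example.com:443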

▪️ Auditing Tools

  🔸 lynis - battle-tested security tool for systems running Linux, macOS, or another Unix-based operating system.

▪️ System Diagnostics/Debuggers

  🔸 strace - diagnostic, debugging and instructional userspace utility for Linux.
  🔸 sysdig - system exploration and troubleshooting tool with first class support for containers.
  🔸 lsof - displays information about files opened by processes.
  🔸 FlameGraph - stack trace visualizer.
  🔸 lsofgraph - small utility to convert Unix lsof output to a graph showing FIFO and UNIX interprocess communication.
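
  For example, to see what an already-running process is doing (the PID is a placeholder):

  strace -f -p <pid> -e trace=network    # trace network-related syscalls of a process
  lsof -p <pid>                          # list files the process currently has open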

▪️ Log Analyzers

  🔸 GoAccess - real-time web log analyzer and interactive viewer that runs in a terminal.
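
  A minimal GoAccess run against a combined-format access log (the path is an example):

  goaccess /var/log/nginx/access.log --log-format=COMBINED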

▪️ Databases

  🔸 usql - universal command-line interface for SQL databases.
  🔸 pgcli - postgres CLI with autocompletion and syntax highlighting.
  🔸 mycli - terminal client for MySQL with autocompletion and syntax highlighting.
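
  Connection examples, assuming local servers and placeholder credentials/database names:

  pgcli -h localhost -U postgres mydb
  mycli -h localhost -u root mydb
  usql postgres://postgres@localhost/mydb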

Web Tools

▪️ SSL

  🔸 SSL Server Test - free online service that performs a deep analysis of the configuration of any SSL web server.
  🔸 SSL Server Test (DEV) - development version of the above deep analysis service.
  🔸 ImmuniWeb® SSLScan - test SSL/TLS (PCI DSS, HIPAA and NIST).
  🔸 urlscan.io - service to scan and analyse websites.
  🔸 Report URI - monitoring security policies like CSP and HPKP.
  🔸 CSP Evaluator - allows developers and security experts to check whether a Content Security Policy (CSP) is secure.
  🔸 cipherli.st - strong ciphers for Apache, Nginx, Lighttpd and more.
  🔸 badssl.com - memorable site for testing clients against bad SSL configs.
  🔸 tlsfun.de - registered for various tests regarding the TLS/SSL protocol.
  🔸 Common CA Database - repository of information about CAs, and their root and intermediate certificates.
  🔸 CERTSTREAM - real-time certificate transparency log update stream.

▪️ HTTP Headers

  🔸 Security Headers - analyse HTTP response headers (with a rating system for the results).
  🔸 Observatory by Mozilla - set of tools to analyze your website.
  🔸 Enable CORS - enable cross-origin resource sharing.
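
  The same response headers can also be inspected locally with curl (the domain is a placeholder):

  curl -sI https://example.com | grep -Ei 'strict-transport-security|content-security-policy|x-frame-options'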

▪️ DNS

  🔸 ViewDNS - one source for free DNS related tools and information.
  🔸 DNSlytics - online investigation tool.
  🔸 DNS Spy - monitor, validate and verify your DNS configurations.

▪️ Mail

  🔸 MX Toolbox - all of your MX record, DNS, blacklist and SMTP diagnostics in one integrated tool.
  🔸 blacklistalert - checks to see if your domain is on a Real Time Spam Blacklist.
  🔸 MultiRBL - complete IP check for sending mail servers.
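
  MX records and a common DNSBL can also be checked from the command line (127.0.0.2 is the standard DNSBL test address, reversed in the query):

  dig +short example.com MX
  dig +short 2.0.0.127.zen.spamhaus.org    # any 127.0.0.x answer means "listed"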

▪️ Mass scanners (search engines)

  🔸 Censys - platform that helps information security practitioners discover, monitor, and analyze devices.
  🔸 Shodan - the world's first search engine for Internet-connected devices.
  🔸 GreyNoise - analyzes Internet-wide scanner traffic (a complement to Shodan and Censys).

▪️ Net-tools

  🔸 Netcraft - detailed report about a site, helping you to make informed choices about its integrity.
  🔸 RIPE NCC - not-for-profit membership association, a Regional Internet Registry and the secretariat for the RIPE.
  🔸 Security Trails - APIs for Security Companies, Researchers and Teams.
  🔸 Online Curl - curl test, analyze HTTP Response Headers.
  🔸 Ping.eu - online Ping, Traceroute, DNS lookup, WHOIS and others.
  🔸 Network-Tools - network tools for webmasters, IT technicians & geeks.
  🔸 URL Encode/Decode - tool to either encode or decode a string of text.
  🔸 RegExr - online tool to learn, build, & test Regular Expressions (RegEx / RegExp).
  🔸 Hardenize - helps you deploy modern security standards.

▪️ Code parsers/playgrounds

  🔸 ShellCheck - finds bugs in your shell scripts.
  🔸 jsbin - live pastebin for HTML, CSS & JavaScript and more.
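
  ShellCheck can also be run locally before pasting anything into an online playground:

  shellcheck script.sh
  git ls-files '*.sh' | xargs shellcheck    # lint every tracked shell script in a repository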

▪️ Performance

  🔸 GTmetrix - analyze your site’s speed and make it faster.
  🔸 Sucuri loadtimetester - test the performance of any of your sites from across the globe.

▪️ Passwords

  🔸 Random.org - generate random passwords.
  🔸 Gotcha? - check against a list of 1.4 billion leaked accounts circulating around the Internet.
  🔸 have i been pwned? - check if you have an account that has been compromised in a data breach.
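
  Strong random passwords can also be generated offline (lengths are arbitrary examples):

  openssl rand -base64 24
  tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 ; echo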

Manuals/Howtos/Tutorials

▪️ Bash

  🔸 pure-bash-bible - a collection of pure bash alternatives to external processes.
  🔸 The Bash Hackers Wiki - holds documentation of any kind about GNU Bash.

▪️ Unix tutorials

  🔸 nixCraft - Linux and Unix tutorials for new and seasoned sysadmins.
  🔸 TecMint - the ideal Linux blog for Sysadmins & Geeks.

▪️ Security

  🔸 OWASP - worldwide not-for-profit charitable organization focused on improving the security of software.
  🔸 Hacking Articles - Raj Chandel's Security & Hacking Blog.
  🔸 GTFOBins - list of Unix binaries that can be exploited by an attacker to bypass local security restrictions.
  🔸 AWS security tools - make your AWS cloud environment more secure.

▪️ Web Apps

  🔸 Mozilla Web Security - helps operational teams with creating secure web applications.

▪️ Secret Knowledge

  🔸 CTF Series: Vulnerable Machines - steps to follow to find vulnerabilities and exploits.
  🔸 How to start RE/malware analysis? - collection of some hints and useful links for the beginners.
  🔸 LZone Cheat Sheets - all cheat sheets.

Blogs

  🔸 Brendan Gregg's Blog - Brendan Gregg is an industry expert in computing performance and cloud computing.
  🔸 Gynvael "GynDream" Coldwind - Gynvael is an IT security engineer at Google.
  🔸 Michał "lcamtuf" Zalewski - "white hat" hacker, computer security expert.
  🔸 Mattias Geniar - developer, Sysadmin, Blogger, Podcaster and Public Speaker.
  🔸 Nick Craver - Software Developer and Systems Administrator for Stack Exchange.
  🔸 Robert Penz - IT security expert.
  🔸 Scott Helme - Security Researcher, international speaker and founder of securityheaders.com and report-uri.com.
  🔸 Kacper Szurek - Detection Engineer at ESET.
  🔸 Troy Hunt - Microsoft Regional Director and Microsoft Most Valuable Professional for Developer Security.
  🔸 Linux Audit - the Linux security blog about Auditing, Hardening and Compliance by Michael Boelen.
  🔸 Linux Security Expert - trainings, howtos, checklists, security tools and more.
  🔸 The Grymoire - collection of useful incantations for wizards, be you computer wizards, magicians, or whatever.

Systems/Services

▪️ Systems

  🔸 Slackware - the most "Unix-like" Linux distribution.
  🔸 OpenBSD - multi-platform 4.4BSD-based UNIX-like operating system.
  🔸 HardenedBSD - aims to implement innovative exploit mitigation and security solutions.

▪️ HTTP(s) Services

  🔸 Varnish HTTP Cache - HTTP accelerator designed for content-heavy dynamic web sites.

▪️ Security/hardening

  🔸 Emerald Onion - Seattle-based encrypted-transit internet service provider.
  🔸 Unbound - validating, recursive, and caching DNS resolver (with TLS).
  🔸 Knot Resolver - caching full resolver implementation, including both a resolver library and a daemon.
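
  Once a local resolver such as Unbound or Knot Resolver is running, it can be checked like this (assuming it listens on 127.0.0.1):

  dig @127.0.0.1 example.com +dnssec
  echo | openssl s_client -connect 127.0.0.1:853    # verify the DNS-over-TLS port, if configured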

Awesome Lists

  🔸 Awesome Sysadmin - amazingly awesome open source sysadmin resources.
  🔸 Awesome Shell - awesome command-line frameworks, toolkits, guides and gizmos.
  🔸 Awesome Hacking - awesome lists for hackers, pentesters and security researchers.
  🔸 Awesome Hacking Resources - collection of hacking/penetration testing resources to make you better.
  🔸 Movies for Hackers - list of movies every hacker & cyberpunk must watch.
  🔸 Awesome Pentest - collection of awesome penetration testing resources, tools and other shiny things.
  🔸 Awesome Pcaptools - collection of tools developed by other researchers to process network traces.
  🔸 Command-line-text-processing - from finding text to search and replace, from sorting to beautifying text and more.
  🔸 Awesome Scalability - best practices in building High Scalability, High Availability, High Stability and more.
  🔸 Awesome Postgres - list of awesome PostgreSQL software, libraries, tools and resources.
  🔸 Free Security eBooks - list of free Security and Hacking eBooks.

Hacking/Penetration testing

▪️ Bounty programs

  🔸 Openbugbounty - allows any security researcher to report a vulnerability on any website.
  🔸 hackerone - global hacker community to surface the most relevant security issues.
  🔸 bugcrowd - crowdsourced cybersecurity for the enterprise.
  🔸 Crowdshield - crowdsourced Security & Bug Bounty Management.

▪️ Web Training Apps (local installation)

  🔸 Metasploitable 2 - deliberately vulnerable virtual machine popular amongst security researchers.
  🔸 DVWA - PHP/MySQL web application that is damn vulnerable.
  🔸 OWASP Mutillidae II - free, open source, deliberately vulnerable web-application.
  🔸 OWASP Juice Shop Project - the most bug-free vulnerable application in existence.
  🔸 OWASP WebGoat Project - insecure web application maintained by OWASP designed to teach web app security.
  🔸 Security Ninjas - open source application security training program.

▪️ Labs

  🔸 Hack The Box - online platform allowing you to test your penetration testing skills.
  🔸 Hacking-Lab - online ethical hacking, computer network and security challenge platform.
  🔸 pwnable.kr - non-commercial wargame site which provides various pwn challenges regarding system exploitation.

One-liners


System

Tool: terminal
Reload shell without exit
exec $SHELL -l
Close shell keeping all subprocess running
disown -a && exit
Exit without saving shell history
kill -9 $$
unset HISTFILE && exit
Perform a branching conditional
true && { echo success;} || { echo failed; }
Pipe stdout and stderr to separate commands
some_command > >(/bin/cmd_for_stdout) 2> >(/bin/cmd_for_stderr)
Redirect stdout and stderr each to separate files and print both to the screen
(some_command 2>&1 1>&3 | tee errorlog ) 3>&1 1>&2 | tee stdoutlog
List of commands you use most often
history | \
awk '{CMD[$2]++;count++;}END { for (a in CMD)print CMD[a] " " CMD[a]/count*100 "% " a;}' | \
grep -v "./" | \
column -c3 -s " " -t | \
sort -nr | nl |  head -n 20
Quickly back up a file
cp filename{,.orig}
Empty a file (truncate to 0 size)
>filename
Delete all files in a folder that don’t match a certain file extension
rm !(*.foo|*.bar|*.baz)
Pass multi-line string to a file
# cat  >filename ... - overwrite file
# cat >>filename ... - append to file
cat > filename << __EOF__
data data data
__EOF__
Edit a file on a remote host using vim
vim scp://user@host//etc/fstab
Create a directory and change into it at the same time
mkd () { mkdir -p "$@" && cd "$@"; }
Convert uppercase files to lowercase files
rename 'y/A-Z/a-z/' *
Print a row of characters across the terminal
printf "%`tput cols`s" | tr ' ' '#'
Show shell history without line numbers
history | cut -c 8-
fc -l -n 1 | sed 's/^\s*//'
Run command(s) on session exit
cat > /etc/profile << __EOF__
_after_logout() {

  username=$(whoami)

  for _pid in $(ps afx | grep sshd | grep "$username" | awk '{print $1}') ; do

    kill -9 $_pid

  done

}
trap _after_logout EXIT
__EOF__
Generate a sequence of numbers
for ((i=1; i<=10; i+=2)) ; do echo $i ; done
# alternative: seq 1 2 10

for ((i=5; i<=10; ++i)) ; do printf '%02d\n' $i ; done
# alternative: seq -w 5 10

Tool: mount
Mount a temporary ram partition
mount -t tmpfs tmpfs /mnt -o size=64M
  • -t - filesystem type
  • -o - mount options
Remount a filesystem as read/write
mount -o remount,rw /

Tool: fuser
Kill a process that is locking a file
fuser -k filename
Show which PID is listening on a specific port
fuser -v 53/udp

Tool: ps
Show a 4-way scrollable process tree with full details
ps awwfux | less -S
Processes per user counter
ps hax -o user | sort | uniq -c | sort -r

Tool: find
Find files that have been modified on your system in the past 60 minutes
find / -mmin -60 -type f
Find all files larger than 20M
find / -type f -size +20M
Find duplicate files (based on MD5 hash)
find -type f -exec md5sum '{}' ';' | sort | uniq --all-repeated=separate -w 33
Change permissions only for files
cd /var/www/site && find . -type f -exec chmod 766 {} \;
cd /var/www/site && find . -type f -exec chmod 664 {} +
Change permissions only for directories
cd /var/www/site && find . -type d -exec chmod g+x {} \;
cd /var/www/site && find . -type d -exec chmod g+rwx {} +
Find files and directories for a specific user
find . -user <username> -print
Find files and directories not owned by a specific user
find . \! -user <username> -print
Delete files older than 60 days
find . -type f -mtime +60 -delete
Recursively remove all empty sub-directories from a directory
find . -depth  -type d  -empty -exec rmdir {} \;
Find all hard links to a given file
find </path/to/dir> -xdev -samefile filename

Tool: top
Use top to monitor only processes matching a specific string
top -p $(pgrep -d , <str>)
  • <str> - process containing str (eg. nginx, worker)

Tool: strace
Track the bind request for a network port
strace -f -e trace=bind nc -l 80
Track the bind request for a network port (all network-related syscalls)
strace -f -e trace=network nc -lu 80

Tool: kill
Kill a process running on a specific port
kill -9 $(lsof -i :<port> | awk '{l=$2} END {print l}')

Tool: diff
Compare two directory trees
diff <(cd directory1 && find | sort) <(cd directory2 && find | sort)

Tool: tail
Annotate tail -f with timestamps
tail -f file | while read ; do echo "$(date +%T.%N) $REPLY" ; done
Analyse an Apache access log for the most common IP addresses
tail -10000 access_log | awk '{print $1}' | sort | uniq -c | sort -n | tail
Analyse web server log and show only 5xx http codes
tail -n 100 -f /path/to/logfile | grep "HTTP/[1-2].[0-1]\" [5]"

Tool: tar
System backup excluding specific directories
cd /
tar -czvpf /mnt/system$(date +%d%m%Y%s).tgz --directory=/ \
--exclude=proc/* --exclude=sys/* --exclude=dev/* --exclude=mnt/* .
System backup excluding specific directories (pigz)
cd /
tar cvpf /backup/snapshot-$(date +%d%m%Y%s).tgz --directory=/ \
--exclude=proc/* --exclude=sys/* --exclude=dev/* \
--exclude=mnt/* --exclude=tmp/* --use-compress-program=pigz .

Tool: dump
System backup to file
dump -y -u -f /backup/system$(date +%d%m%Y%s).lzo /
Restore system from lzo file
cd /
# use the actual filename of the backup created above
restore -rf /backup/system<timestamp>.lzo

Tool: cpulimit
Limit the cpu usage of a process
cpulimit -p pid -l 50

Tool: pwdx
Show current working directory of a process
pwdx <pid>

Tool: taskset
Start a command on only one CPU core
taskset -c 0 <command>

Tool: tr
Show directories in the PATH, one per line
tr : '\n' <<<$PATH

Tool: chmod
Remove executable bit from all files in the current directory
chmod -R -x+X *
Restore permission for /bin/chmod
# 1: copy any executable to inherit its exec bit, then overwrite it with the chmod binary
cp /bin/ls chmod.01
cp /bin/chmod chmod.01       # cp preserves the destination's permissions
./chmod.01 700 /bin/chmod

# 2:
/bin/busybox chmod 0700 /bin/chmod

# 3:
setfacl --set u::rwx,g::---,o::--- /bin/chmod

Tool: who
Find last reboot time
who -b

Tool: screen
Start screen in detached mode
screen -d -m [<command>]

Tool: du
Show the 20 biggest directories with human-readable sizes (K/M/G)
du | \
sort -r -n | \
awk '{split("K M G",v); s=1; while($1>1024){$1/=1024; s++} print int($1)" "v[s]"\t"$2}' | \
head -n 20

Tool: inotifywait
Run a command every time a file in a directory is modified
while true ; do inotifywait -r -e MODIFY dir/ && ls dir/ ; done;

Tool: openssl
Testing connection to remote host
echo | openssl s_client -connect google.com:443 -showcerts
Testing connection to remote host (with SNI support)
echo | openssl s_client -showcerts -servername google.com -connect google.com:443
Testing connection to remote host with specific ssl version
openssl s_client -tls1_2 -connect google.com:443
Testing connection to remote host with specific ssl cipher
openssl s_client -cipher 'AES128-SHA' -connect google.com:443
Generate private key
# _ciph: des3, aes
( _ciph="des3" ; _fd="private.key" ; _len="2048" ; \
openssl genrsa -${_ciph} -out ${_fd} ${_len} )
Remove password from private key
( _fd="private.key" ; _fd_unp="private_unp.key" ; \
openssl rsa -in ${_fd} -out ${_fd_unp} )
Get public key from private key
( _fd="private.key" ; _fd_pub="public.key" ; \
openssl rsa -pubout -in ${_fd} -out ${_fd_pub} )
Generate private key + csr
( _fd="private.key" ; _fd_csr="request.csr" ; _len="2048" ; \
openssl req -out ${_fd_csr} -new -newkey rsa:${_len} -nodes -keyout ${_fd} )
Generate csr
( _fd="private.key" ; _fd_csr="request.csr" ; \
openssl req -out ${_fd_csr} -new -key ${_fd} )
Generate csr (metadata from existing certificate)
( _fd="private.key" ; _fd_csr="request.csr" ; _fd_crt="cert.crt" ; \
openssl x509 -x509toreq -in ${_fd_crt} -out ${_fd_csr} -signkey ${_fd} )
Generate csr with -config param
( _fd="private.key" ; _fd_csr="request.csr" ; \
openssl req -new -sha256 -key ${_fd} -out ${_fd_csr} \
-config <(
cat <<-EOF
[req]
default_bits = 2048
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn

[ dn ]
C=<two-letter ISO abbreviation for your country>
ST=<state or province where your organization is legally located>
L=<city where your organization is legally located>
O=<legal name of your organization>
OU=<section of the organization>
CN=<fully qualified domain name>

[ req_ext ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = <fully qualified domain name>
DNS.2 = <next domain>
DNS.3 = <next domain>
EOF
))
Convert DER to PEM
( _fd_der="cert.crt" ; _fd_pem="cert.pem" ; \
openssl x509 -in ${_fd_der} -inform der -outform pem -out ${_fd_pem} )
Convert PEM to DER
( _fd_der="cert.crt" ; _fd_pem="cert.pem" ; \
openssl x509 -in ${_fd_pem} -outform der -out ${_fd_der} )
Checking whether the private key and the certificate match
(openssl rsa -noout -modulus -in private.key | openssl md5 ; openssl x509 -noout -modulus -in certificate.crt | openssl md5) | uniq

Tool: gnutls-cli
Testing connection to remote host (with sni)
gnutls-cli -p 443 google.com
Testing connection to remote host (without sni)
gnutls-cli --disable-sni -p 443 google.com

Tool: secure-delete
Secure delete with shred
shred -vfuz -n 10 file
shred --verbose --random-source=/dev/urandom -n 1 /dev/sda
Secure delete with scrub
scrub -p dod /dev/sda
scrub -p dod -r file
Secure delete with badblocks
badblocks -s -w -t random -v /dev/sda
badblocks -c 10240 -s -w -t random -v /dev/sda
Secure delete with secure-delete
srm -vz /tmp/file
sfill -vz /local
sdmem -v
swapoff /dev/sda5 && sswap -vz /dev/sda5

Tool: dd
Show dd status every so often
dd <dd_params> status=progress
watch --interval 5 killall -USR1 dd
Redirect output to a file with dd
echo "string" | dd of=filename

HTTP/HTTPS

Tool: curl
curl -Iks https://www.google.com
  • -I - show response headers only
  • -k - insecure connection when using ssl
  • -s - silent mode (not display body)
curl -Iks --location -X GET -A "x-agent" https://www.google.com
  • --location - follow redirects
  • -X - set method
  • -A - set user-agent
curl -Iks --location -X GET -A "x-agent" --proxy http://127.0.0.1:16379 https://www.google.com
  • --proxy [socks5://|http://] - set proxy server
Check DNS and HTTP trace with headers for specific domains
### Set domains and external dns servers.
_domain_list=(google.com) ; _dns_list=("8.8.8.8" "1.1.1.1")

for _domain in "${_domain_list[@]}" ; do

  printf '=%.0s' {1..48}

  echo

  printf "[\\e[1;32m+\\e[m] resolve: %s\\n" "$_domain"

  for _dns in "${_dns_list[@]}" ; do

    # Resolve domain.
    host "${_domain}" "${_dns}"

    echo

  done

  for _proto in http https ; do

    printf "[\\e[1;32m+\\e[m] trace + headers: %s://%s\\n" "$_proto" "$_domain"

    # Get trace and http headers.
    curl -Iks -A "x-agent" --location "${_proto}://${_domain}"

    echo

  done

done

unset _domain_list _dns_list

Tool: httpie
http -p Hh https://www.google.com
  • -p - print request and response headers
    • H - request headers
    • B - request body
    • h - response headers
    • b - response body
http -p Hh --follow --max-redirects 5 --verify no https://www.google.com
  • -F, --follow - follow redirects
  • --max-redirects N - maximum for --follow
  • --verify no - skip SSL verification
http -p Hh --follow --max-redirects 5 --verify no --proxy http:http://127.0.0.1:16379 https://www.google.com
  • --proxy [http:] - set proxy server

Network

Tool: ssh
Compare a remote file with a local file
ssh user@host cat /path/to/remotefile | diff /path/to/localfile -
SSH connection through host in the middle
ssh -t reachable_host ssh unreachable_host
Run command over ssh on remote host
cat > cmd.txt << __EOF__
cat /etc/hosts
__EOF__

ssh host -l user $(<cmd.txt)
Get public key from private key
ssh-keygen -y -f ~/.ssh/id_rsa
Get all fingerprints
ssh-keygen -l -f .ssh/known_hosts
SSH authentication with user password
ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no user@remote_host
SSH authentication with public key
ssh -o PreferredAuthentications=publickey -o PubkeyAuthentication=yes -i id_rsa user@remote_host
Simple SSH session recording
function _ssh_sesslog() {

  _sesdir="<path/to/session/logs>"

  mkdir -p "${_sesdir}" && \
  ssh "$@" 2>&1 | tee -a "${_sesdir}/$(date +%Y%m%d).log"

}

# Alias:
alias ssh='_ssh_sesslog'
Using Keychain for SSH logins
### Delete all of ssh-agent's keys.
function _scl() {

  /usr/bin/keychain --clear

}

### Add key to keychain.
function _scg() {

  /usr/bin/keychain /path/to/private-key
  source "$HOME/.keychain/$HOSTNAME-sh"

}

Tool: linux-dev
Testing remote connection to port
timeout 1 bash -c "</dev/<proto>/<host>/<port>" >/dev/null 2>&1 ; echo $?
  • <proto> - set protocol (tcp/udp)
  • <host> - set remote host
  • <port> - set destination port
Read and write to TCP or UDP sockets with common bash tools
exec 5<>/dev/tcp/<host>/<port>; cat <&5 & cat >&5; exec 5>&-

Tool: tcpdump
tcpdump -ne -i eth0 -Q in host 192.168.252.1 and port 443
  • -n - don’t convert addresses
  • -e - print the link-level headers
  • -i [iface|any] - set interface
  • -Q|-D [in|out|inout] - choose send/receive direction (-D - for old tcpdump versions)
  • host [ip|hostname] - set host, also [host not]
  • [and|or] - set logic
  • port [1-65535] - set port number, also [port not]
tcpdump -ne -i eth0 -Q in host 192.168.252.1 and port 443 -c 5 -w tcpdump.pcap
  • -c [num] - capture only num number of packets
  • -w [filename] - write packets to file, -r [filename] - reading from file

Tool: tcpick
Analyse packets in real-time
while true ; do tcpick -a -C -r dump.pcap ; sleep 2 ; clear ; done

Tool: ngrep
ngrep -d eth0 "www.google.com" port 443
  • -d [iface|any] - set interface
  • [domain] - set hostname
  • port [1-65535] - set port number
ngrep -d eth0 "www.google.com" (host 10.240.20.2) and (port 443)
  • (host [ip|hostname]) - filter by ip or hostname
  • (port [1-65535]) - filter by port number
ngrep -d eth0 -qt -O ngrep.pcap "www.google.com" port 443
  • -q - quiet mode (only payloads)
  • -t - added timestamps
  • -O [filename] - save output to file, -I [filename] - reading from file
ngrep -d eth0 -qt 'HTTP' 'tcp'
  • HTTP - show http headers
  • tcp|udp - set protocol
  • [src|dst] host [ip|hostname] - set direction for specific node
ngrep -l -q -d eth0 -i "User-Agent: curl*"
  • -l - stdout line buffered
  • -i - case-insensitive search

Tool: hping3
hping3 -V -p 80 -s 5050 <scan_type> www.google.com
  • -V|--verbose - verbose mode
  • -p|--destport - set destination port
  • -s|--baseport - set source port
  • <scan_type> - set scan type
    • -F|--fin - set FIN flag, port open if no reply
    • -S|--syn - set SYN flag
    • -P|--push - set PUSH flag
    • -A|--ack - set ACK flag (use when ping is blocked, RST response back if the port is open)
    • -U|--urg - set URG flag
    • -Y|--ymas - set Y unused flag (0x80 - nullscan), port open if no reply
    • -M 0 -UPF - set TCP sequence number and scan type (URG+PUSH+FIN), port open if no reply
hping3 -V -c 1 -1 -C 8 www.google.com
  • -c [num] - packet count
  • -1 - set ICMP mode
  • -C|--icmptype [icmp-num] - set icmp type (default icmp-echo = 8)
hping3 -V -c 1000000 -d 120 -S -w 64 -p 80 --flood --rand-source <remote_host>
  • --flood - sent packets as fast as possible (don’t show replies)
  • --rand-source - random source address mode
  • -d --data - data size
  • -w|--win - winsize (default 64)

Tool: netcat
nc -kl 5000
  • -l - listen for an incoming connection
  • -k - listening after client has disconnected
  • >filename.out - save receive data to file (optional)
nc 192.168.0.1 5051 < filename.in
  • < filename.in - send data to remote host
nc -vz 10.240.30.3 5000
  • -v - verbose output
  • -z - scan for listening daemons
nc -vzu 10.240.30.3 1-65535
  • -u - scan only udp ports
Transfer data file (archive)
server> nc -l 5000 | tar xzvfp -
client> tar czvfp - /path/to/dir | nc 10.240.30.3 5000
Launch remote shell
server> nc -l 5000 -e /bin/bash
client> nc 10.240.30.3 5000
Simple file server
while true ; do nc -l 5000 | tar -xvf - ; done
Simple minimal HTTP Server
while true ; do nc -l -p 1500 -c 'echo -e "HTTP/1.1 200 OK\n\n $(date)"' ; done
Simple HTTP Server

Restarts web server after each request - remove while condition for only single connection.

cat > index.html << __EOF__
<!doctype html>
    <head>
        <meta charset="utf-8">
        <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
        <title></title>
        <meta name="description" content="">
        <meta name="viewport" content="width=device-width, initial-scale=1">
    </head>
    <body>

    <p>

      Hello! It's a site.

    </p>

    </body>
</html>
__EOF__
server> while : ; do \
(echo -ne "HTTP/1.1 200 OK\r\nContent-Length: $(wc -c <index.html)\r\n\r\n" ; cat index.html;) | \
nc -l -p 5000 \
; done
  • -p - port number
Simple HTTP Proxy (single connection)
#!/usr/bin/env bash

if [[ $# != 2 ]] ; then
  printf "%s\\n" \
         "usage: ./nc-proxy listen-port bk_host:bk_port"
  exit 1
fi

_listen_port="$1"
_bk_host=$(echo "$2" | cut -d ":" -f1)
_bk_port=$(echo "$2" | cut -d ":" -f2)

printf "  lport: %s\\nbk_host: %s\\nbk_port: %s\\n\\n" \
       "$_listen_port" "$_bk_host" "$_bk_port"

_tmp=$(mktemp -d)
_back="$_tmp/pipe.back"
_sent="$_tmp/pipe.sent"
_recv="$_tmp/pipe.recv"

trap 'rm -rf "$_tmp"' EXIT

mkfifo -m 0600 "$_back" "$_sent" "$_recv"

sed "s/^/=> /" <"$_sent" &
sed "s/^/<=  /" <"$_recv" &

nc -l -p "$_listen_port" <"$_back" | \
tee "$_sent" | \
nc "$_bk_host" "$_bk_port" | \
tee "$_recv" >"$_back"
server> chmod +x nc-proxy && ./nc-proxy 8080 192.168.252.10:8000
  lport: 8080
bk_host: 192.168.252.10
bk_port: 8000

client> http -p h 10.240.30.3:8080
HTTP/1.1 200 OK
Accept-Ranges: bytes
Cache-Control: max-age=31536000
Content-Length: 2748
Content-Type: text/html; charset=utf-8
Date: Sun, 01 Jul 2018 20:12:08 GMT
Last-Modified: Sun, 01 Apr 2018 21:53:37 GMT
Create a single-use TCP or UDP proxy
### TCP -> TCP
nc -l -p 2000 -c "nc [ip|hostname] 3000"

### TCP -> UDP
nc -l -p 2000 -c "nc -u [ip|hostname] 3000"

### UDP -> UDP
nc -l -u -p 2000 -c "nc -u [ip|hostname] 3000"

### UDP -> TCP
nc -l -u -p 2000 -c "nc [ip|hostname] 3000"

Tool: socat
Testing remote connection to port
socat - TCP4:10.240.30.3:22
  • - - standard input (STDIO)
  • TCP4:<params> - set tcp4 connection with specific params
    • [hostname|ip] - set hostname/ip
    • [1-65535] - set port number
Redirecting TCP-traffic to a UNIX domain socket under Linux
socat TCP-LISTEN:1234,bind=127.0.0.1,reuseaddr,fork,su=nobody,range=127.0.0.0/8 UNIX-CLIENT:/tmp/foo
  • TCP-LISTEN:<params> - set tcp listen with specific params
    • [1-65535] - set port number
    • bind=[hostname|ip] - set bind hostname/ip
    • reuseaddr - allows other sockets to bind to an address
    • fork - keeps the parent process attempting to produce more connections
    • su=nobody - set user
    • range=[ip-range] - ip range
  • UNIX-CLIENT:<params> - communicates with the specified peer socket
    • filename - define socket

Tool: lsof
Show processes that are using internet connections at the moment
lsof -P -i -n
Show processes using a specific port number
lsof -i tcp:443
List all listening ports together with the PID of the associated process
lsof -Pan -i tcp -i udp
List all open ports and their owning executables
lsof -i -P | grep -i "listen"
Show all open ports
lsof -Pnl -i
Show open ports (LISTEN)
lsof -Pni4 | grep LISTEN | column -t
List all files opened by a particular command
lsof -c "process"
View user activity per directory
lsof -u username -a +D /etc
Show 10 Largest Open Files
lsof / | \
awk '{ if($7 > 1048576) print $7/1048576 "MB" " " $9 " " $1 }' | \
sort -n -u | tail | column -t

Tool: netstat
Graph the number of connections for each host
netstat -an | \
grep ESTABLISHED | \
awk '{print $5}' | \
awk -F: '{print $1}' | \
grep -v -e '^[[:space:]]*$' | \
sort | uniq -c | \
awk '{ printf("%s\t%s\t",$2,$1) ; for (i = 0; i < $1; i++) {printf("*")}; print "" }'
Monitor open connections for specific port including listen, count and sort it per IP
watch "netstat -plan | grep :443 | awk {'print \$5'} | cut -d: -f 1 | sort | uniq -c | sort -nk 1"

Tool: rsync
Rsync remote data as root using sudo
rsync --rsync-path 'sudo rsync' username@hostname:/path/to/dir/ /local/

Tool: host
Resolves the domain name (using external dns server)
host google.com 9.9.9.9
Checks the domain administrator (SOA record)
host -t soa google.com 9.9.9.9

Tool: dig
Resolves the domain name (short output)
dig google.com +short
Lookup NS record for specific domain
dig @9.9.9.9 google.com NS
Query only answer section
dig google.com +nocomments +noquestion +noauthority +noadditional +nostats
Query ALL DNS Records
dig google.com ANY +noall +answer
DNS Reverse Look-up
dig -x 172.217.16.14 +short

Tool: network-other
Get all subnets for specific AS (Autonomous system)
AS="AS32934"
whois -h whois.radb.net -- "-i origin ${AS}" | \
grep "^route:" | \
cut -d ":" -f2 | \
sed -e 's/^[ \t]//' | \
sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4 | \
cut -d ":" -f2 | \
sed -e 's/^[ \t]/allow /' | \
sed 's/$/;/' | \
sed 's/allow  */subnet -> /g'
Resolves domain name from dns.google.com with curl and jq
_dname="google.com" ; curl -s "https://dns.google.com/resolve?name=${_dname}&type=A" | jq .

Programming

Tool: awk
Remove duplicate entries in a file without sorting
awk '!x[$0]++' filename
Exclude multiple columns using AWK
awk '{$1=$3=""}1' filename

Tool: sed
Print a specific line from a file
sed -n 10p /path/to/file
Remove a specific line from a file
sed -i 10d /path/to/file
# alternative (BSD): sed -i '' 10d /path/to/file
Remove a range of lines from a file
sed -i -re '<start>,<end>d' <file>

Tool: grep
Search for a “pattern” inside all files in the current directory
grep -rn "pattern"
grep -RnisI "pattern" *
fgrep "pattern" * -R
Remove blank lines from a file and save output to new file
grep . filename > newfilename
Exclude multiple patterns
grep -vE '(error|critical|warning)' filename
Show data from file without comments
grep -v '^[[:space:]]*#' filename
Show data from file without comments and new lines
egrep -v '#|^$' filename