Thursday, October 6, 2022

One-line openssl private certificate and key with alternative DNS

Openssl added a nice alternative to the config file or extension file for creating requests with alternative DNS names. This will create a key and certificate (not a certificate request) with two additional DNS names, alt1.example.net and alt2.example.net:

sudo openssl req -x509 -nodes -days 3650 -newkey rsa:4096 -keyout mykey.key -out mycer.crt  -subj '/CN=main.example.net' -addext 'subjectAltName=DNS:alt1.example.net,DNS:alt2.example.net'
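To confirm both names made it into the certificate you can read them back. A quick sketch (the temp directory and the smaller 2048-bit key are just to keep the demo fast; -ext needs OpenSSL 1.1.1+):

```shell
# generate a throwaway cert with two SANs, then print them back
tmp=$(mktemp -d)
openssl req -x509 -nodes -days 1 -newkey rsa:2048 \
  -keyout "$tmp/demo.key" -out "$tmp/demo.crt" \
  -subj '/CN=main.example.net' \
  -addext 'subjectAltName=DNS:alt1.example.net,DNS:alt2.example.net'
# -ext prints only the requested extension
openssl x509 -in "$tmp/demo.crt" -noout -ext subjectAltName
rm -rf "$tmp"
```

You should see both DNS entries listed under X509v3 Subject Alternative Name.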


Wednesday, December 29, 2021

Victoria Metrics on AWS EC2 instance

Will configure a single EC2 instance as a VictoriaMetrics server to be used as Prometheus storage.

The access to VM (VictoriaMetrics) is done via port 8247 and is protected by HTTP basic auth. All traffic is encrypted with a self-signed certificate.

Installation

Will install manually by downloading the releases from GitHub and configuring the local system.

Download binaries

# create a group and user for vm
$ sudo groupadd -r victoriametrics
$ sudo useradd -g victoriametrics victoriametrics
 
# download
$ curl -L https://github.com/VictoriaMetrics/VictoriaMetrics/releases/download/v1.70.0/victoria-metrics-amd64-v1.70.0.tar.gz --output victoria-metrics-amd64-v1.70.0.tar.gz

# unpack and install it
$ sudo tar xvf victoria-metrics-amd64-v1.70.0.tar.gz -C /usr/local/bin/
$ sudo chown -v root:root /usr/local/bin/victoria-metrics-prod

# create data directory
$ sudo mkdir /var/lib/victoria-metrics-data
$ sudo chown -v victoriametrics:victoriametrics /var/lib/victoria-metrics-data

Configure the service

sudo tee /etc/systemd/system/victoriametrics.service <<'EOF'
[Unit]
Description=High-performance, cost-effective and scalable time series database, long-term remote storage for Prometheus
After=network.target

[Service]
Type=simple
User=victoriametrics
Group=victoriametrics
StartLimitBurst=5
StartLimitInterval=0
Restart=on-failure
RestartSec=1
ExecStart=/usr/local/bin/victoria-metrics-prod \
        -storageDataPath=/var/lib/victoria-metrics-data \
        -httpListenAddr=127.0.0.1:8428 \
        -retentionPeriod=1
ExecStop=/bin/kill -s SIGTERM $MAINPID
LimitNOFILE=65536
LimitNPROC=32000

[Install]
WantedBy=multi-user.target

EOF

At this point you can start the service with systemctl enable victoriametrics.service --now; however port 8428 is neither protected nor encrypted, so we will add basic authentication and TLS encryption with a self-signed certificate (any valid certificate will work). Note that the service listens only on localhost.

Vmauth

To protect the service we will use vmauth, which is part of a tool set released by Victoria Metrics.

# download and install the vm utils

$ curl -L https://github.com/VictoriaMetrics/VictoriaMetrics/releases/download/v1.70.0/vmutils-amd64-v1.70.0.tar.gz --output vmutils-amd64-v1.70.0.tar.gz
$ sudo tar xvf vmutils-amd64-v1.70.0.tar.gz -C /usr/local/bin/
$ sudo chown -v root:root /usr/local/bin/vm*-prod
Configure vmauth

Create a config file (config.yml) to enable basic authentication.

The format of the file is simple, you need a username and a password.

$ sudo mkdir -p /etc/victoriametrics/ssl/
$ sudo chown -vR victoriametrics:victoriametrics /etc/victoriametrics
$ sudo touch /etc/victoriametrics/config.yml
$ sudo chown -v victoriametrics:victoriametrics /etc/victoriametrics/config.yml

# generate a password for our user
$ python3  -c 'import secrets; print(secrets.token_urlsafe())'
KGKK_NoiciEMn6KdBk6CkcLHZt6TpB-Cgt12UFqnutU

# write the config
$ sudo tee -a /etc/victoriametrics/config.yml <<'EOF'
> users:
>   - username: "user1"
>     password: "KGKK_NoiciEMn6KdBk6CkcLHZt6TpB-Cgt12UFqnutU"
>     url_prefix: "http://127.0.0.1:8428"
> # end config
> EOF
Install a self-signed certificate
$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:4096 -keyout /etc/victoriametrics/ssl/victoriametrics.key -out /etc/victoriametrics/ssl/victoriametrics.crt

$ sudo chown -Rv victoriametrics:victoriametrics /etc/victoriametrics/ssl/
Enable vmauth service
sudo tee /etc/systemd/system/vmauth.service <<'EOF'
[Unit]
Description=Simple auth proxy, router and load balancer for VictoriaMetrics
After=network.target

[Service]
Type=simple
User=victoriametrics
Group=victoriametrics
StartLimitBurst=5
StartLimitInterval=0
Restart=on-failure
RestartSec=1
ExecStart=/usr/local/bin/vmauth-prod \
        --tls=true \
        --auth.config=/etc/victoriametrics/config.yml \
        --httpListenAddr=0.0.0.0:8247 \
        --tlsCertFile=/etc/victoriametrics/ssl/victoriametrics.crt \
        --tlsKeyFile=/etc/victoriametrics/ssl/victoriametrics.key
ExecStop=/bin/kill -s SIGTERM $MAINPID
LimitNOFILE=65536
LimitNPROC=32000

[Install]
WantedBy=multi-user.target


EOF

Start and enable it with systemctl enable vmauth.service --now.

To test, you first need to construct a base64 string from the username and password you have written into the config.yml file.

For example user vmuser and password secret

$ echo -n 'vmuser:secret' | base64
dm11c2VyOnNlY3JldA==

# to test vmauth
$ curl -H 'Authorization: Basic dm11c2VyOnNlY3JldA==' --insecure https://localhost:8247/api/v1/query -d 'query={job=~".*"}'
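Since the whole point is to use this as Prometheus storage, the Prometheus side is a remote_write block pointed at vmauth. A sketch (vm.example.net stands in for your instance's address; insecure_skip_verify is only needed because the certificate is self-signed):

```yaml
# prometheus.yml (fragment) - ship samples to VictoriaMetrics through vmauth
remote_write:
  - url: https://vm.example.net:8247/api/v1/write
    basic_auth:
      username: user1
      password: KGKK_NoiciEMn6KdBk6CkcLHZt6TpB-Cgt12UFqnutU
    tls_config:
      insecure_skip_verify: true
```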

Operations

Snapshots

List what's available (pass the same Authorization header and --insecure flag as in the test above; omitted here for brevity)

curl 'https://localhost:8247/snapshot/list'

{"status":"ok","snapshots":["20211227145126-16C1DDB61673BA11"

Create a new snapshot

curl 'https://localhost:8247/snapshot/create'

{"status":"ok","snapshot":"20211227145526-16C1DDB61673BA12"}

List again the snapshots

curl -s 'https://localhost:8247/snapshot/list' | jq .
{
  "status": "ok",
  "snapshots": [
    "20211227145126-16C1DDB61673BA11",
    "20211227145526-16C1DDB61673BA12"
  ]
}

Backups

The snapshots are located on local disk under the data path (the -storageDataPath= parameter); on my instance that resolves to /var/lib/victoria-metrics-data/.

The data in the snapshots is compressed with Zstandard.

To push the backups to s3 you can use vmbackup.

$ sudo vmbackup-prod -storageDataPath=/var/lib/victoria-metrics-data  -snapshotName=20211227145526-16C1DDB61673BA12 -dst=s3://BUCKET-NAME/`date +%s`

...

2021-12-29T16:07:20.571Z        info    VictoriaMetrics/app/vmbackup/main.go:105        gracefully shutting down http server for metrics at ":8420"
2021-12-29T16:07:20.572Z        info    VictoriaMetrics/app/vmbackup/main.go:109        successfully shut down http server for metrics in 0.001 seconds

For more info you can see vmbackup.

Friday, December 24, 2021

Postgresql locks

Locks in postgres

Find locks

select pid, state, usename, query, query_start 
from pg_stat_activity 
where pid in (
  select pid from pg_locks l 
  join pg_class t on l.relation = t.oid 
  and t.relkind = 'r' 
  where t.relname = 'search_hit'
);

Killing locks

To cancel the query a backend is currently running:

SELECT pg_cancel_backend(PID);

If that is not enough, terminate the backend connection entirely:

SELECT pg_terminate_backend(PID);

Haproxy socket stats

Enable stats

Reporting is provided if you enable stats in its config.

The setting is described at https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-stats%20enable

In this post I describe how to use the socket type.

Enable the stats socket

I enable it in the global section like so

global

  stats socket /var/lib/haproxy/stats group haproxy mode 664

What this does is:

  • enable the stats socket under /var/lib/haproxy/stats
  • the group owner is haproxy (haproxy runs as user haproxy)
  • permissions are rw (user), rw(group), r(others)

Note there is an admin option that allows controlling haproxy through the socket, but I don't use it.

Reading stats from socket (netcat)

You need to have netcat (nc) installed.

$ echo 'show stat' | nc -U /var/lib/haproxy/stats
# pxname,svname,qcur,qmax,scur,smax,slim,
....
http_frontend,
....

Reading stats from socket (socat)

You need to install socat since it is usually not installed by default.

To use it

$ echo 'show stat' | socat stdio /var/lib/haproxy/stats
# pxname,svname,qcur,qmax,scur,smax,slim,
....
http_frontend,
....
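The output is plain CSV, so the usual text tools apply. For instance, the proxy and server names live in the first two columns (a sketch on a sample line; on a live system pipe the socat output instead):

```shell
# pxname and svname are the first two CSV fields of 'show stat'
echo 'http_frontend,FRONTEND,0,0,1,2,2000' | cut -d, -f1-2
# http_frontend,FRONTEND
```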

Friday, December 18, 2020

AWS cli filter for security groups

There are times when I want to see the security groups in an AWS region. Nothing special really, you can always use the aws cli :)

But wait ... there is so much output especially if you have many groups and many rules.

So this is a simple way to filter on the following values (you can add more values, but these are the ones I mostly use)

  • VPC Id
  • Group Name
  • Group Id

Tools that I use

  • aws cli (you need to install it)
  • jq (available on many linux distros)
  • awk (comes with any linux distro)

This is how you put all together

$ export GROUP='My SG'
$ aws ec2 describe-security-groups --filters Name=group-name,Values="$GROUP" --output json | jq '.SecurityGroups[]| .VpcId, .GroupName, .GroupId' | awk '{printf (NR%3==0) ? $0 "\n" : $0}' | sed -e 's/""/ - /g'
# this will print
"vpc-xxxxxx - My SG - sg-yyyy"

# bonus - you can use a regex for GROUP
$ export GROUP='My*Prod'
$ aws ec2 describe-security-groups --filters Name=group-name,Values="$GROUP" --output json | jq '.SecurityGroups[]| .VpcId, .GroupName, .GroupId' | awk '{printf (NR%3==0) ? $0 "\n" : $0}' | sed -e 's/""/ - /g'
# this will print
"vpc-xxxxxx - My Prod - sg-yyyy"
"vpc-xxxxxx - My deprecated Prod - sg-yyyy"
"vpc-xxxxxx - My whatever Prod - sg-yyyy"
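As an aside, jq's string interpolation can build the same report on its own, without the awk/sed steps. A sketch (the echoed JSON stands in for the aws cli output):

```shell
# one jq filter instead of jq+awk+sed; -r drops the surrounding quotes
echo '{"SecurityGroups":[{"VpcId":"vpc-xxxxxx","GroupName":"My SG","GroupId":"sg-yyyy"}]}' \
  | jq -r '.SecurityGroups[] | "\(.VpcId) - \(.GroupName) - \(.GroupId)"'
# vpc-xxxxxx - My SG - sg-yyyy
```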

Friday, December 20, 2019

Tcpdump on docker interfaces

This post shows how you can inspect docker containers traffic with tcpdump on linux.

First find the docker names and the mac addresses.


bash $ for c in `sudo docker ps| grep -v CON| awk '{print $1}'`; do sudo docker inspect $c| jq ". |map({ (.Name): .NetworkSettings.Networks[].MacAddress })"; done

[
  {
    "/docker-demo_cortex2_1": "02:42:ac:12:00:08"
  }
]
[
  {
    "/docker-demo_consul_1": "02:42:ac:12:00:05"
  }
]
[
  {
    "/docker-demo_prometheus2_1": "02:42:ac:12:00:03"
  }
]
[
  {
    "/docker-demo_cortex3_1": "02:42:ac:12:00:09"
  }
]
[
  {
    "/docker-demo_cortex1_1": "02:42:ac:12:00:06"
  }
]
[
  {
    "/docker-demo_prometheus3_1": "02:42:ac:12:00:04"
  }
]
[
  {
    "/docker-demo_prometheus1_1": "02:42:ac:12:00:02"
  }
]
[
  {
    "/docker-demo_grafana_1": "02:42:ac:12:00:07"
  }
]

I want to inspect /docker-demo_cortex1_1 so I list the forwarding table (fdb)

bash $ /sbin/bridge fdb |grep 02:42:ac:12:00:06

02:42:ac:12:00:06 dev vethee0ca4e master br-f9c7e5b79104
This says that the dev `vethee0ca4e` forwards to the master bridge `br-f9c7e5b79104`
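If you script this lookup, the device name is just the third whitespace-separated field of the fdb line. A sketch (the sample line mirrors the output above):

```shell
# extract the veth device for a known MAC from 'bridge fdb' output
echo '02:42:ac:12:00:06 dev vethee0ca4e master br-f9c7e5b79104' \
  | awk '$1 == "02:42:ac:12:00:06" {print $3}'
# vethee0ca4e
```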

List what interfaces are into the system

bash$ /sbin/ip link show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:6f:ce:6d brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:cf:95:1a:17 brd ff:ff:ff:ff:ff:ff
4: br-f9c7e5b79104: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 02:42:ae:f7:a0:c6 brd ff:ff:ff:ff:ff:ff
28: veth47b30a5@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f9c7e5b79104 state UP mode DEFAULT group default 
    link/ether b2:23:05:8a:cd:4e brd ff:ff:ff:ff:ff:ff link-netnsid 0
30: veth95ec404@if29: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f9c7e5b79104 state UP mode DEFAULT group default 
    link/ether ba:41:85:94:67:39 brd ff:ff:ff:ff:ff:ff link-netnsid 1
32: veth246e156@if31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f9c7e5b79104 state UP mode DEFAULT group default 
    link/ether 92:26:8e:09:97:af brd ff:ff:ff:ff:ff:ff link-netnsid 2
34: veth426ba55@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f9c7e5b79104 state UP mode DEFAULT group default 
    link/ether 6a:c0:12:86:30:0f brd ff:ff:ff:ff:ff:ff link-netnsid 5
38: veth91e2bee@if37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f9c7e5b79104 state UP mode DEFAULT group default 
    link/ether de:53:75:37:b0:88 brd ff:ff:ff:ff:ff:ff link-netnsid 6
40: veth9199c33@if39: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f9c7e5b79104 state UP mode DEFAULT group default 
    link/ether e2:d1:fa:61:83:cd brd ff:ff:ff:ff:ff:ff link-netnsid 3
42: vethdb6a7ca@if41: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f9c7e5b79104 state UP mode DEFAULT group default 
    link/ether ea:51:60:cc:6f:e8 brd ff:ff:ff:ff:ff:ff link-netnsid 4
44: vethee0ca4e@if43: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f9c7e5b79104 state UP mode DEFAULT group default 
    link/ether ca:b1:72:d1:c7:e2 brd ff:ff:ff:ff:ff:ff link-netnsid 7

As you can see the interface that I want to inspect is listed as 44.

At this point just start a tcpdump on the interface

bash$ sudo tcpdump -nvv -s0 -A -i vethee0ca4e
In case you have multiple bridges configured on the system it helps to first find the master bridge you are interested in.
bash$ sudo docker network ls

NETWORK ID          NAME                         DRIVER              SCOPE
bedcfa44fe2b        bridge                       bridge              local
f9c7e5b79104        docker-demo_cortex_network   bridge              local
0d3a96789a7f        host                         host                local
1ecffcd51252        none                         null                local

Sunday, April 14, 2019

Making use of Ansible vault from fabric(fabfile)

Ansible provides a convenient solution to encrypt sensitive data such as passwords, secrets, etc. - Ansible Vault. This post shows how to use the ansible vault from Fabric. First you would think: why? I too thought it was a crazy idea :) however since I've been using Fabric and Ansible for a long while I said why not - they are both written in python, right?! To use it you obviously need Fabric and Ansible installed. Create a fabfile and at the top import a few Ansible modules

from ansible.cli import CLI
from ansible.parsing.vault import VaultLib
from ansible.parsing.dataloader import DataLoader
import yaml
import os

This allows us to interface with VaultLib, which in turn will decrypt the vault. And this is how to use them from a function


def get_vault_data(vault_pass_file, vault_file):
    secrets = CLI.setup_vault_secrets(
            DataLoader(),
            vault_ids=[],
            vault_password_files=[vault_pass_file])

    v = VaultLib(secrets=secrets)

    data = v.decrypt(open(vault_file, 'rb').read())
    return yaml.safe_load(data)

# in case you keep the password file into your home directory - adjust as required
HOME = os.environ.get("HOME")
VAULT_PASSWORD_FILE = os.path.join(HOME, ".ansible/vault_password_file")

my_vault = get_vault_data(VAULT_PASSWORD_FILE, "/etc/ansible/vault.yml")  

print(my_vault)  # this is the data from the encrypted Ansible vault. 

Monday, May 14, 2018

Python pip install from git with specific revision

There are times when you want to try a package at a specific git revision or tag.

The general syntax is

pip install git+https://github.com/{ username }/{ reponame }.git@{ tag name }#egg={ desired egg name }
And this is how to install tag 3.7.0b0 from github via https

# install
pip install git+https://github.com/mongodb/mongo-python-driver.git@3.7.0b0#egg=pymongo

# use pymongo
import pymongo
pymongo.MongoClient()

# MongoClient(host=['localhost:27017'], document_class=dict, tz_aware=False, connect=True)

Tuesday, November 28, 2017

CentOS 7 Postfix relay (gmail)

How to send emails through a smart relay that uses SASL and TLS

I used:

  • CentOS Linux release 7.3.1611
  • postfix-2.10.1-6.el7.x86_64
The rpm comes from the CentOS Base yum repository.

The setup

File: /etc/postfix/main.cf
This is the main configuration for postfix in regards to how you would like it to behave.

smtpd_banner = $myhostname ESMTP $mail_name
biff = no
append_dot_mydomain = no
readme_directory = no
smtpd_tls_session_cache_timeout=3600s
tls_random_source=dev:/dev/urandom
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_tls_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl/password
smtp_use_tls = yes
smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.trust.crt
smtp_tls_loglevel = 1
smtp_tls_security_level = encrypt
smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = ${OPTIONAL_HOSTNAME}
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = $myhostname localhost.$mydomain
relayhost = [${mail.RELAY}]:587
mynetworks = 127.0.0.0/8
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = localhost
inet_protocols = ipv4

# comment these two when done
debug_peer_list = ${mail.RELAY}
debug_peer_level = 3

File: /etc/postfix/sasl/password
Write into the file the username and password that you use to authenticate.
[${mail.RELAY}]    ${user@domain}:${PASSWORD}  
Once you save the file you need to create the database, in this case it's hash
cd /etc/postfix/sasl && postmap password
At this point restart postfix
systemctl restart postfix

The problem

Since everything is configured ok ... you would expect that now you can send email, however ...
smtp_sasl_authenticate: mail.RELAY[IPV4]:587: SASL mechanisms PLAIN LOGIN
warning: SASL authentication failure: No worthy mechs found
...
send attr reason = SASL authentication failed; cannot authenticate to server mail.RELAY[IPV4]: no mechanism available 
The annoying part is that the username and password are fine ... you can test by using telnet
# First compute the base64 encoded string; \0 is a NUL byte separator
printf '${user@domain}\0${user@domain}\0${PASSWORD}' | base64

# telnet to the smtp relay

telnet ${mail.RELAY}
EHLO ${OPTIONAL_HOSTNAME}
250-server.example.com
250-PIPELINING
250-SIZE 10240000
250-ETRN
250-AUTH DIGEST-MD5 PLAIN CRAM-MD5
250 8BITMIME
AUTH PLAIN ${COMPUTED_STRING_FROM_PRINTF}
235 Authentication successful
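For a concrete example, with the made-up credentials user@example.com / secret the string is computed like this:

```shell
# AUTH PLAIN wants base64 of: authzid NUL authcid NUL password
# (user@example.com and secret are throwaway example credentials)
printf 'user@example.com\0user@example.com\0secret' | base64
# dXNlckBleGFtcGxlLmNvbQB1c2VyQGV4YW1wbGUuY29tAHNlY3JldA==
```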
So what is not working?! Based on the errors we've seen, postfix complains that there are no worthy mechs ... that may lead you to read into the source code. Bottom line: since Postfix uses the Cyrus SASL library (as per the Postfix documentation) you actually need to install the cyrus-sasl packages
yum install -y  cyrus-sasl cyrus-sasl-lib cyrus-sasl-plain

# restart postfix

systemctl restart postfix 
At this point if you keep the debug on you will see
....
smtp_sasl_authenticate: ${mail.RELAY}[${IPV4}]:587: SASL mechanisms PLAIN LOGIN
xsasl_cyrus_client_get_user: ${user@domain}
xsasl_cyrus_client_get_passwd: ${PASSWORD}
...
... 235 2.7.0 Authentication successful
 
Note: all ${} symbols should be replaced with your relevant information. The myhostname value in /etc/postfix/main.cf is optional; if not present postfix uses your hostname.

Wednesday, November 1, 2017

Zabbix server under Selinux (Centos 7)


When running the zabbix server under Selinux, out of the box when you start
systemctl start zabbix-server
you will get an error like this in /var/log/zabbix/zabbix_server.log


 using configuration file: /etc/zabbix/zabbix_server.conf
 cannot set resource limit: [13] Permission denied
 cannot disable core dump, exiting...
 Starting Zabbix Server. Zabbix 3.0.12 (revision 73586).

 

The problem is related to zabbix policy under Selinux.

How to Fix it

First, as the message says, the zabbix server needs to set some resource limits.
To do so it needs permission from selinux. Run the following to capture the denial and transform it into a format that selinux can load later.
cat /var/log/audit/audit.log | grep zabbix_server | grep denied | audit2allow -M zabbix_server.limits

Two files are created, a .pp and a .te. The .te file should have content similar to

 module zabbix_server.limits 1.0;

require {
        type zabbix_t;
        class process setrlimit;
}

#============= zabbix_t ==============
allow zabbix_t self:process setrlimit;

 

Load this policy with semodule -i zabbix_server.limits.pp

At this point zabbix server can be started: systemctl start zabbix-server
If you need to connect to a database such as mysql/postgres you will need to allow zabbix server again ... (note: I used mysql/mariadb)

cat /var/log/audit/audit.log | grep zabbix_server | grep denied | audit2allow -M zabbix_server.ports

This will create again two files; the .te file should look like

module zabbix_server_ports 1.0;

require {
        type mysqld_port_t;
        type zabbix_t;
        class process setrlimit;
        class tcp_socket name_connect;
}

#============= zabbix_t ==============

#!!!! This avc can be allowed using the boolean 'zabbix_can_network'
allow zabbix_t mysqld_port_t:tcp_socket name_connect;

#!!!! This avc is allowed in the current policy
allow zabbix_t self:process setrlimit;

    
As you can see the setrlimit rule is already present and you will need to allow the socket access.
To do so semodule -i zabbix_server.ports.pp

At this point you have two policies loaded and you should restart zabbix server systemctl restart zabbix-server
Note: This may apply to any other version of Linux distros/versions that use Selinux though I only tried on CentOS 7.

Friday, February 10, 2017

MongoDB shell - query collections with special characters

From time to time I find MongoDB collections whose names contain characters that the mongo shell interprets differently, so you can't use them as-is.

An example: if your collection name is Items:SubItems and you try to query as you would normally do


mongos> db.Items:SubItems.findOne()
2017-02-10T14:11:17.305+0000 E QUERY    SyntaxError: Unexpected token :

The 'fix' is to use a special javascript notation - so this will work
mongos> db['Items:SubItems'].stats()
{
... 
}

This is called 'square bracket notation' in javascript; db.getCollection('Items:SubItems') works as well.
See Property_accessors for more info.

Tuesday, December 6, 2016

Password recovery on Zabbix server UI

In case you need it ...

Obtain access to the database for read/write (for mysql this is what you need)

update zabbix.users set passwd=md5('mynewpassword') where alias='Admin';
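The md5() here is computed by MySQL itself; if you want to sanity-check the hash from a shell first (a sketch, assuming coreutils md5sum is available):

```shell
# same digest mysql's md5() produces for the new password
printf '%s' 'mynewpassword' | md5sum | awk '{print $1}'
```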

Wednesday, November 16, 2016

Netcat HTTP server

Netcat is a very versatile program used for network communications.

Often I need to test different programs with a dummy HTTP server, so using netcat for this is very easy.

Let's say you want to respond with HTTP code 200 ... this is what you do with netcat in a shell


 nc -k  -lp 9000 -c 'echo "HTTP/1.1 200 OK\nContent-Length:0\nContent-Type: text/html; charset=utf-8"' -vvv -o session.txt

To explain the switches used:
  • -k keep listening after the first connection so netcat can serve multiple clients
  • -l listen TCP on the all interfaces
  • -p the port number to bind
  • -c 'echo "HTTP/1.1 200 OK\nContent-Length:0\nContent-Type: text/html; charset=utf-8"' is the most interesting one ... this responds back to the client with a minimal http header and sets code 200 OK
  • -vvv verbosity level
  • -o session.txt netcat will write into this file all the input and output
Now you have a dummy http server running on port 9000 that will answer 200 OK ALL the time :)

Monday, March 28, 2016

Backups with Duplicity and Dropbox

Dropbox is a very popular service for file storage; by default it synchronizes all your files across your devices. This is important to know since you will be backing up data into Dropbox and you don't want to download the backups on every device you have connected.

What we want to do is to backup files, encrypt them and send them to Dropbox.
All this is achieved with Duplicity.

This is the setup

  • Linux OS, any distro will work I guess but I tried on Ubuntu 14.04 LTS
  • Dropbox account (going pro or business is recommended since backups will typically grow over the 2GB of a basic account)

To encrypt files you will need GPG. In case you don't have a key on your system we need to do a bit of work; if you do have a gpg key you can skip the next section.

GPG Setup

In this section we will create the GPG public/private keys that will be used to encrypt the data you back up to Dropbox.


# install
$ sudo apt-get install gnupg
#
# check if you have any keys
#
$ gpg --list-keys
# if this is empty than you need to create a set of keys
# follow the wizard to create keys
#
$ gpg --gen-key
gpg (GnuPG) 1.4.16; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

gpg: keyring `/home/yourname/.gnupg/secring.gpg' created
Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 
Key does not expire at all
Is this correct? (y/N) y

You need a user ID to identify your key; the software constructs the user ID
from the Real Name, Comment and Email Address in this form:
    "Heinrich Heine (Der Dichter) "

Real name: Your Name
Email address: yourname@gmail.com
Comment: 
You selected this USER-ID:
    "Your Name "

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.


....+++++
..+++++
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
+++++

gpg: checking the trustdb
....

#
#
# At this point the keys are created and saved into your keyring
# list keys
#
#
$ gpg --list-keys
/home/yourname/.gnupg/pubring.gpg
--------------------------------
pub   2048R/999B4B79 2016-03-26
            ^^^^^^^^ used by duplicity
uid                  Your Name 
sub   2048R/99917D12 2016-03-26 

# Note 999B4B79 which is your keyid

Duplicity install

$ sudo apt-get install duplicity

After installation if you are on Ubuntu 14.04 LTS you will need to apply this patch
http://bazaar.launchpad.net/~ed.so/duplicity/fix.dpbx/revision/965#duplicity/backends/dpbxbackend.py
to /usr/lib/python2.7/dist-packages/duplicity/backends/dpbxbackend.py
If you don't know how to apply the patch, it is simpler to open the file at line 75 and add the line marked below

 72 def command(login_required=True):
 73     """a decorator for handling authentication and exceptions"""
 74     def decorate(f):
 75         def wrapper(self, *args):
 76             from dropbox import rest  ## line to add
 77             if login_required and not self.sess.is_linked():
 78               log.FatalError("dpbx Cannot login: check your credentials",log.ErrorCode.dpbx_nologin)

Dropbox and duplicity setup

You need to have an account first. Open your browser and login.

Backups with duplicity and dropbox

Since this is the first time you run it, you need to create an authorization token; this is done as follows


$ duplicity --encrypt-key 999B4B79 full SOURCE dpbx:///
------------------------------------------------------------------------
url: https://www.dropbox.com/1/oauth/authorize?oauth_token=TOKEN_HERE
Please authorize in the browser. After you're done, press enter.

Now in your browser authorize the application. This will create an access token in Dropbox.
You can see the apps you have linked by going to Security.
You should see "backend for duplicity" under linked Apps.
In case you need to know what token is in use, you can find it on your system in ~/.dropbox.token_store.txt


Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
GnuPG passphrase: 
Retype passphrase to confirm: 
--------------[ Backup Statistics ]--------------
StartTime 1459031263.59 (Sat Mar 26 18:27:43 2016)
EndTime 1459031263.73 (Sat Mar 26 18:27:43 2016)
ElapsedTime 0.14 (0.14 seconds)
SourceFiles 2
SourceFileSize 1732720 (1.65 MB)
NewFiles 2
NewFileSize 1732720 (1.65 MB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 2
RawDeltaSize 1728624 (1.65 MB)
TotalDestinationSizeChange 388658 (380 KB)
Errors 0
-------------------------------------------------

Backups

When the first full backup has finished you can start making incremental backups, listing the backups, etc.
# list the backup files
duplicity --encrypt-key 999B4B79 list-current-files dpbx:///
#

## Make an incremental backup

duplicity --encrypt-key 999B4B79 incr SOURCE dpbx:///
.....
.....
.....

duplicity --encrypt-key 999B4B79 list-current-files dpbx:///

Troubleshooting

During a backup if you see something like

Attempt 1 failed. NameError: global name 'rest' is not defined
Attempt 2 failed. NameError: global name 'rest' is not defined

See the note about Ubuntu 14.04: you need to patch the dpbxbackend.py file.

Notes

If you use multiple computers and don't want to download all the backups from Dropbox, you need to
enable selective sync and exclude the Apps/duplicity folder.
I haven't used duplicity for a long time and have heard some mixed opinions: some say it is excellent and some
say it has design flaws (I didn't check) where your full backup is taken again after a while even if
you only do incrementals. Remains to be seen.
If this doesn't work well I would look into Borg Backup, which seems to be the best these days since it
has dedup built in and many other features. One thing it doesn't have, though, is duplicity's many backends, which
cover pretty much all cloud storage solutions around :).

Wednesday, January 13, 2016

Sublime Text X11 Forward - linux headless

One of the newer editors (compared with Vim or Emacs) is Sublime Text.
It has many useful features and is quite popular these days; combined with vintage mode (vim emulation) enabled it is
quite interesting.

This post shows what I did to have sublime text 3 working on a remote headless linux server, I used CentOS 7.1 installed with the group Base.

Since sublime text needs a display to run you will need to install a few packages.

sudo yum install gtk2
sudo yum install pango
sudo yum install gtk2-devel
sudo yum install dejavu-sans-fonts # or the font of your choice
sudo yum install xorg-x11-xauth

After all these packages are installed the ssh server (sshd for CentOS) needs to have the following settings.

# /etc/ssh/sshd_config

X11Forwarding yes
X11DisplayOffset 10
TCPKeepAlive yes
X11UseLocalhost yes
Restart sshd in case you changed your config file
sudo systemctl restart sshd

I used putty on a windows box so I had to make a small hack.

cd $HOME
touch .Xauthority  # empty file
Windows based
Configure putty to enable X11 Forwarding and connect to your server.
One more thing to mention: if you use Windows you will also need to install the program Xming.
After you download it, run the installer and start the Xming server.
Linux
You will need to run an X server - doesn't matter which one - and have X11 forwarded to the agent.
# when connect add the -X
ssh -X my_host_with_sublime_installed
# Or you enabled X11Forward into your .ssh/config
# something like this will do
Host *
   ForwardX11 yes


In case sublime text is not installed, download it from their site (it is always nice to have a license too) and extract
the files; typically you will have a directory called sublime_text_3.
# check first that the display is forward it
$ echo $DISPLAY
localhost:10.0
$ cd  sublime_text_3
$  ./sublime_text --wait
# 
At this point onto your local screen(display) you should see a window pop up with sublime text.

Saturday, August 22, 2015

Vagrant with libvirt(KVM) Ubuntu14

Vagrant doesn't have an official provider for libvirt, but there is a plugin that allows running KVM machines via libvirt on Linux.

First you might ask: why not VirtualBox/VMware etc.? Simply because KVM is built in and is very lightweight (especially if you run it on your laptop). Also, if you have pre-made KVM virtual machines you can easily package them as Vagrant boxes.

This is what you need to get started on Ubuntu 14.

Obtain the package (yours could be a different version)

$ wget https://dl.bintray.com/mitchellh/vagrant/vagrant_1.7.4_x86_64.deb
Install package

$ sudo dpkg -i vagrant_1.7.4_x86_64.deb

Install kvm, virt-manager, libvirt and ruby-dev

$ sudo apt-get install ruby-dev
$ sudo apt-get install kvm virt-manager
$ sudo apt-get install libvirt-dev

Remove ruby-libvirt just in case, as we need a specific version

$ sudo apt-get remove ruby-libvirt

Install the gem at the required version

$ sudo gem install ruby-libvirt -v '0.5.2'

Install the plugin

$ sudo vagrant plugin install vagrant-libvirt
_Note_: Installed the plugin 'vagrant-libvirt (0.0.30)'!
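With the plugin installed, a minimal Vagrantfile sketch looks like this (the box name and resource values are placeholders; pick a box built for the libvirt provider):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "some-libvirt-box"   # placeholder: use a libvirt-compatible box
  config.vm.provider :libvirt do |lv|
    lv.memory = 1024                   # MB of RAM for the guest
    lv.cpus   = 2
  end
end
```

Then bring the machine up with `vagrant up --provider=libvirt`.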

Thursday, December 18, 2014

Supervisor (python supervisord) email alerts

The program supervisor, written in python, is used to supervise long running processes. If a long running process stops (crashes), supervisor will detect it and restart it. You will get entries in the log files, but unless you have a log aggregation tool, log into the server, or have some other monitoring in place, you will not know that your process has crashed.

However there is hope :) - you can set up an event listener in supervisor which can email you when a process exits. To do so you will need to install the python package superlance. This is how the setup is done.

# install superlance
$ sudo pip install superlance  # if you don't have pip install try easy_install 

# configure supervisor to send events to crashmail

$ sudo vim /etc/supervisor/supervisord.conf  # change according to your setup

[eventlistener:crashmail]
command=crashmail -a -m root@localhost
events=PROCESS_STATE_EXITED

$ sudo supervisorctl reload  # restart supervisord so it picks up the new config
# done :)

In the example above, if a process crashes (exits) an event will be sent to crashmail, which in turn will email root@localhost - of course you can change the email address. crashmail actually uses sendmail to send the email (postfix and qmail ship a sendmail-like program, so no worries).
Also, the alert will be sent for any program that crashes, but if you want to filter you can watch just the program you want by specifying -p program_name instead of -a; for more info see the Crashmail section of the superlance docs.
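For instance, watching a single program and mailing a different address would look like this (program name and address are placeholders):

```ini
[eventlistener:crashmail]
command=crashmail -p myapp -m ops@example.com
events=PROCESS_STATE_EXITED
```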

Friday, November 21, 2014

Gitlab(Rails) gem loader error

I was trying to make a simple bash pre-receive hook in Gitlab and got this


# pre-receive hook
#!/bin/bash

`knife node show`

# Error
/usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.5/lib/bundler/rubygems_integration.rb:214:in `block in replace_gem': chef is not part of the bundle. Add it to Gemfile. (Gem::LoadError)

Initially I thought changing the hook to ruby would fix it, but after I tried all 6 ways to
execute a command according to http://tech.natemurray.com/2007/03/ruby-shell-commands.html with no luck, I looked further into the gem specs for Rails and it looks like you can't load a gem that is not
declared in the Gemfile of your application.

So - what options do you really have? Install all the gems and their dependencies into the Rails application Gemfile just to execute a
command?! Well, there is a different way: sudo to the rescue :)


# pre-receive hook
#!/bin/bash

`sudo -u USER_THAT_RUNS_THE_APP knife node show`


# also make sure in sudoers that USER_THAT_RUNS_THE_APP is allowed to execute without a tty
Defaults:USER_THAT_RUNS_THE_APP !requiretty
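An alternative worth trying before reaching for sudo: clear Bundler's environment variables so the command runs outside the Rails app's bundle. The variable names below are Bundler/RubyGems' standard ones; whether this is enough depends on your setup.

```shell
#!/bin/bash
# pre-receive hook: strip the Bundler/RubyGems context inherited from the
# Rails app, then call the outside command
env -u BUNDLE_GEMFILE -u RUBYOPT -u GEM_HOME -u GEM_PATH knife node show
```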

Sunday, September 14, 2014

Vim - find occurrences in files.

Vim is the editor for anybody using the cli on a daily basis. One useful feature it has is find/grep across files. Obviously you can exit or suspend vim and run find or grep, however not many know that vim has this built in. You can simply use vimgrep and the likes - for more info see http://vim.wikia.com/wiki/Find_in_files_within_Vim.
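A quick sketch of how that looks in practice (the pattern and glob are placeholders):

```
:vimgrep /TODO/ **/*.rb   " search recursively; matches go to the quickfix list
:copen                    " open the quickfix window to browse the matches
:cnext                    " jump to the next match
```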

Tuesday, March 25, 2014

Vim setup for Chef(Opscode) Cookbooks

I've been seriously programming Chef cookbooks for a while but always felt something was missing ... Well I didn't have

  • jump to definition for any Chef dsl
  • auto completion
  • syntax highlight

Recently I found a solution for this; here is my vim setup (just as easily you could do it in Sublime Text as well). These are the tools in my setup

  • vim
  • vim-chef
  • ripper-tags (to my surprise ctags doesn't work well with ruby files ...)

Setting it up is as simple as

# vim with pathogen
$ git clone https://github.com/vadv/vim-chef ~/.vim/bundle/vim-chef
$ sudo /opt/chef/embedded/bin/gem install gem-ripper-tags
$ knife cookbook create test_cookbook -o .
# create tags - there are better ways to do it - see gem-tags for example
$ ripper-tags -R /opt/chef/embedded/lib/ruby/gems/1.9.1/gems/chef-11.10.4 -f tags
$ ctags -R -f tags_project
$ vim
:set tags=tags,tags_project
# done
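Instead of typing :set tags every session, the same setting can live in your vimrc; the trailing ; makes vim search upward from the current directory for the tag files (a small sketch, assuming the files sit in the project root):

```
" in ~/.vimrc
set tags=tags;,tags_project;
```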