Thursday, October 6, 2022
Online openssl private certificate and key with alternative DNS
sudo openssl req -x509 -nodes -days 3650 -newkey rsa:4096 -keyout mykey.key -out mycer.crt -subj '/CN=main.example.net' -addext 'subjectAltName=DNS:alt1.example.net,DNS:alt2.example.net'
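To verify that the alternative names made it into the certificate:

$ openssl x509 -in mycer.crt -noout -text | grep -A1 'Subject Alternative Name'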
Friday, December 24, 2021
Haproxy socket stats
Enable stats
HAProxy provides reporting if you enable stats in its config.
The setting is described at https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-stats%20enable
In this post I describe how to use the socket
type.
Enable the stats socket
I enable it in the global section like so:

global
    stats socket /var/lib/haproxy/stats group haproxy mode 664
What this does is:
- enable the stats socket under
/var/lib/haproxy/stats
- the group owner is haproxy (running haproxy as user haproxy)
- permissions are rw (user), rw (group), r (others)
Note there is an admin option that allows you to control haproxy
through the socket, but I don't use it.
Reading stats from socket (netcat)
You need to have netcat (nc) installed.

$ echo 'show stat' | nc -U /var/lib/haproxy/stats
# pxname,svname,qcur,qmax,scur,smax,slim, .... http_frontend, ....
Reading stats from socket (socat)
You need to install socat, since it is usually not installed by default.
To use it:

$ echo 'show stat' | socat stdio /var/lib/haproxy/stats
# pxname,svname,qcur,qmax,scur,smax,slim, .... http_frontend, ....
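The output is plain CSV, so the usual shell tools apply; for example, to show only the proxy name, server name and current sessions (columns 1, 2 and 5 in the header above) - a minimal sketch:

$ echo 'show stat' | socat stdio /var/lib/haproxy/stats | cut -d, -f1,2,5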
Friday, December 18, 2020
AWS cli filter for security groups
There are times when I want to see the security groups in an AWS region. Nothing special really, you can always use the aws cli :)
But wait ... there is so much output, especially if you have many groups and many rules.
So this is a simple way to filter on the following values (you can add more, but these are what I mostly use):
- VPC Id
- Group Name
- Group Id
Tools that I use
- aws cli (you need to install it)
- jq (available on many linux distros)
- awk (comes with any linux distro)
This is how you put it all together:
$ export GROUP='My SG'
$ aws ec2 describe-security-groups --filters Name=group-name,Values="$GROUP" --output json |
    jq '.SecurityGroups[]| .VpcId, .GroupName, .GroupId' |
    awk '{printf (NR%3==0) ? $0 "\n" : $0}' |
    sed -e 's/""/ - /g'
# this will print
"vpc-xxxxxx - My SG - sg-yyyy"
# bonus - you can use a wildcard for GROUP
$ export GROUP='My*Prod'
$ aws ec2 describe-security-groups --filters Name=group-name,Values="$GROUP" --output json |
    jq '.SecurityGroups[]| .VpcId, .GroupName, .GroupId' |
    awk '{printf (NR%3==0) ? $0 "\n" : $0}' |
    sed -e 's/""/ - /g'
# this will print
"vpc-xxxxxx - My Prod - sg-yyyy"
"vpc-xxxxxx - My deprecated Prod - sg-yyyy"
"vpc-xxxxxx - My whatever Prod - sg-yyyy"
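As an aside, the aws cli can do the extraction itself with a JMESPath --query, skipping jq and awk entirely; a sketch that should print the same three values tab separated:

$ aws ec2 describe-security-groups --filters Name=group-name,Values="$GROUP" --query 'SecurityGroups[].[VpcId,GroupName,GroupId]' --output text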
Friday, December 20, 2019
Tcpdump on docker interfaces
This post shows how you can inspect docker container traffic with tcpdump on linux.
First find the docker container names and their MAC addresses.
$ for c in `sudo docker ps | grep -v CON | awk '{print $1}'`; do sudo docker inspect $c | jq ". | map({ (.Name): .NetworkSettings.Networks[].MacAddress })"; done
[ { "/docker-demo_cortex2_1": "02:42:ac:12:00:08" } ]
[ { "/docker-demo_consul_1": "02:42:ac:12:00:05" } ]
[ { "/docker-demo_prometheus2_1": "02:42:ac:12:00:03" } ]
[ { "/docker-demo_cortex3_1": "02:42:ac:12:00:09" } ]
[ { "/docker-demo_cortex1_1": "02:42:ac:12:00:06" } ]
[ { "/docker-demo_prometheus3_1": "02:42:ac:12:00:04" } ]
[ { "/docker-demo_prometheus1_1": "02:42:ac:12:00:02" } ]
[ { "/docker-demo_grafana_1": "02:42:ac:12:00:07" } ]
I want to inspect /docker-demo_cortex1_1, so I list the forwarding table (fdb):
$ /sbin/bridge fdb | grep 02:42:ac:12:00:06
02:42:ac:12:00:06 dev vethee0ca4e master br-f9c7e5b79104

This says that the dev `vethee0ca4e` forwards to the master bridge `br-f9c7e5b79104`.
List the interfaces on the system:
$ /sbin/ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:6f:ce:6d brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:cf:95:1a:17 brd ff:ff:ff:ff:ff:ff
4: br-f9c7e5b79104: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:ae:f7:a0:c6 brd ff:ff:ff:ff:ff:ff
28: veth47b30a5@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f9c7e5b79104 state UP mode DEFAULT group default
    link/ether b2:23:05:8a:cd:4e brd ff:ff:ff:ff:ff:ff link-netnsid 0
30: veth95ec404@if29: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f9c7e5b79104 state UP mode DEFAULT group default
    link/ether ba:41:85:94:67:39 brd ff:ff:ff:ff:ff:ff link-netnsid 1
32: veth246e156@if31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f9c7e5b79104 state UP mode DEFAULT group default
    link/ether 92:26:8e:09:97:af brd ff:ff:ff:ff:ff:ff link-netnsid 2
34: veth426ba55@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f9c7e5b79104 state UP mode DEFAULT group default
    link/ether 6a:c0:12:86:30:0f brd ff:ff:ff:ff:ff:ff link-netnsid 5
38: veth91e2bee@if37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f9c7e5b79104 state UP mode DEFAULT group default
    link/ether de:53:75:37:b0:88 brd ff:ff:ff:ff:ff:ff link-netnsid 6
40: veth9199c33@if39: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f9c7e5b79104 state UP mode DEFAULT group default
    link/ether e2:d1:fa:61:83:cd brd ff:ff:ff:ff:ff:ff link-netnsid 3
42: vethdb6a7ca@if41: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f9c7e5b79104 state UP mode DEFAULT group default
    link/ether ea:51:60:cc:6f:e8 brd ff:ff:ff:ff:ff:ff link-netnsid 4
44: vethee0ca4e@if43: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f9c7e5b79104 state UP mode DEFAULT group default
    link/ether ca:b1:72:d1:c7:e2 brd ff:ff:ff:ff:ff:ff link-netnsid 7

As you can see, the interface that I want to inspect is listed as 44.
At this point just start a tcpdump on the interface
$ sudo tcpdump -nvv -s0 -A -i vethee0ca4e

In case you have multiple bridges configured on the system, it helps to first find the master bridge you are interested in.
$ sudo docker network ls
NETWORK ID          NAME                         DRIVER              SCOPE
bedcfa44fe2b        bridge                       bridge              local
f9c7e5b79104        docker-demo_cortex_network   bridge              local
0d3a96789a7f        host                         host                local
1ecffcd51252        none                         null                local
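A shortcut I sometimes use instead of matching MAC addresses: read the interface index from inside the container and match it on the host. A sketch; it assumes the container image ships cat and that its interface is eth0:

$ IDX=$(sudo docker exec docker-demo_cortex1_1 cat /sys/class/net/eth0/iflink)
$ ip -o link | awk -v idx="$IDX" -F': ' '$1 == idx {print $2}'
vethee0ca4e@if43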
Tuesday, November 28, 2017
CentOS 7 Postfix relay (gmail)
How to send emails through a smart relay that uses SASL and TLS.
I used:
- CentOS Linux release 7.3.1611
- postfix-2.10.1-6.el7.x86_64
The setup
File: /etc/postfix/main.cf
This is the main configuration for postfix in regards to how you would like it to behave.

smtpd_banner = $myhostname ESMTP $mail_name
biff = no
append_dot_mydomain = no
readme_directory = no
smtpd_tls_session_cache_timeout = 3600s
tls_random_source = dev:/dev/urandom
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_tls_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl/password
smtp_use_tls = yes
smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.trust.crt
smtp_tls_loglevel = 1
smtp_tls_security_level = encrypt
smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = ${OPTIONAL_HOSTNAME}
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = $myhostname localhost.$mydomain
relayhost = [${mail.RELAY}]:587
mynetworks = 127.0.0.0/8
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = localhost
inet_protocols = ipv4
# comment these two when done
debug_peer_list = ${mail.RELAY}
debug_peer_level = 3
File: /etc/postfix/sasl/password
Write into the file the username and password that you use to authenticate.

[${mail.RELAY}] ${user@domain}:${PASSWORD}

Once you save the file you need to create the database, in this case a hash:

cd /etc/postfix/sasl && postmap password

At this point restart postfix:

systemctl restart postfix
The problem
Since all of that is configured ok ... you would expect that now you can send email, however ...

smtp_sasl_authenticate: mail.RELAY[IPV4]:587: SASL mechanisms PLAIN LOGIN
warning: SASL authentication failure: No worthy mechs found
...
send attr reason = SASL authentication failed; cannot authenticate to server mail.RELAY[IPV4]: no mechanism available

The strange part is that the username and password work fine ... you can test by using telnet:
# First compute the base64 encoded string. \0 is a null terminated string
printf '${user@domain}\0${user@domain}\0${PASSWORD}' | base64
# telnet to the smtp relay
telnet ${mail.RELAY} 587
EHLO ${OPTIONAL_HOSTNAME}
250-server.example.com
250-PIPELINING
250-SIZE 10240000
250-ETRN
250-AUTH DIGEST-MD5 PLAIN CRAM-MD5
250 8BITMIME
AUTH PLAIN ${COMPUTED_STRING_FROM_PRINTF}
235 Authentication successful

So what is not working?! Based on the errors we have seen, postfix complains that there are no worthy mechs ... that may lead you to read the source code. Bottom line: since Postfix uses the Cyrus SASL library (as per the Postfix documentation), you actually need to install the Cyrus SASL packages.
yum install -y cyrus-sasl cyrus-sasl-lib cyrus-sasl-plain
# restart postfix
systemctl restart postfix

At this point if you keep the debug on you will see:
....
smtp_sasl_authenticate: ${mail.RELAY}[${IPV4}]:587: SASL mechanisms PLAIN LOGIN
xsasl_cyrus_client_get_user: ${user@domain}
xsasl_cyrus_client_get_passwd: ${PASSWORD}
...
...
235 2.7.0 Authentication successful

Note: all ${} symbols should be replaced with your relevant information. The value of myhostname in /etc/postfix/main.cf is optional; if it is not present postfix uses your hostname.
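To verify end to end, push a test message through the relay and watch the log (assuming the mailx package is installed; replace the address as above):

echo 'relay test' | mail -s 'relay test' ${user@domain}
tail -f /var/log/maillog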
Wednesday, November 1, 2017
Zabbix server under Selinux (CentOS 7)
When running zabbix server under SELinux out of the box, when you start it

systemctl start zabbix-server

you will get an error like this in /var/log/zabbix/zabbix_server.log:
using configuration file: /etc/zabbix/zabbix_server.conf
cannot set resource limit: [13] Permission denied
cannot disable core dump, exiting...
Starting Zabbix Server. Zabbix 3.0.12 (revision 73586).
The problem is related to zabbix policy under Selinux.
How to Fix it
First, as the message says, zabbix server needs to set some resource limits.
To do so it needs permission from SELinux. Run the following to pick up
the denial and transform it into a format that SELinux can load later:
cat /var/log/audit/audit.log | grep zabbix_server | grep denied | audit2allow -M zabbix_server.limits
Two files are created, a .pp and a .te. The .te file should have content similar to:
module zabbix_server.limits 1.0;

require {
    type zabbix_t;
    class process setrlimit;
}

#============= zabbix_t ==============
allow zabbix_t self:process setrlimit;
Load this policy with semodule -i zabbix_server.limits.pp
At this point zabbix server can be started: systemctl start zabbix-server
If you need to connect to a database such as mysql/postgres you will need to allow zabbix server again ... (note: I used mysql/mariadb)
cat /var/log/audit/audit.log | grep zabbix_server | grep denied | audit2allow -M zabbix_server.ports
This will again create two files; the .te file should look like:
module zabbix_server_ports 1.0;

require {
    type mysqld_port_t;
    type zabbix_t;
    class process setrlimit;
    class tcp_socket name_connect;
}

#============= zabbix_t ==============
#!!!! This avc can be allowed using the boolean 'zabbix_can_network'
allow zabbix_t mysqld_port_t:tcp_socket name_connect;

#!!!! This avc is allowed in the current policy
allow zabbix_t self:process setrlimit;

As you can see the setrlimit rule is already present; you still need to allow the socket access.
To do so
semodule -i zabbix_server.ports.pp
At this point you have two policies loaded and you should restart zabbix server: systemctl restart zabbix-server
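Alternatively, as the comment in the generated .te file hints, the database access can be allowed with the stock SELinux boolean instead of a custom module (note this grants zabbix broader network access than just the mysql port):

setsebool -P zabbix_can_network on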
Note: This may apply to any other Linux distro/version that uses SELinux, though I only tried CentOS 7.
Tuesday, December 6, 2016
Password recovery on Zabbix server UI
In case you need it ...
Obtain access to the database for read/write (for mysql this is what you need)
update zabbix.users set passwd=md5('mynewpassword') where alias='Admin';
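For example, you can apply it in one shot with the mysql client (assuming root credentials; adjust to your setup):

mysql -u root -p -e "update zabbix.users set passwd=md5('mynewpassword') where alias='Admin';"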
Wednesday, November 16, 2016
Netcat HTTP server
Netcat is a very versatile program used for network communications.
Often I need to test different programs with a dummy HTTP server, so using netcat for this is very easy.
Let's say you want to respond with HTTP code 200 ... this is what you do with netcat in a shell:

nc -k -lp 9000 -c 'echo "HTTP/1.1 200 OK\nContent-Length:0\nContent-Type: text/html; charset=utf-8"' -vvv -o session.txt

To explain the switches used:
- -k accept multiple connections; without it netcat exits after the first connection
- -l listen for TCP connections on all interfaces
- -p the port number to bind
- -c 'echo "HTTP/1.1 200 OK\nContent-Length:0\nContent-Type: text/html; charset=utf-8"' is the most interesting one ... this responds back to the client with a minimal http header and sets code 200 OK
- -vvv verbosity level
- -o session.txt netcat will write into this file all the input and output
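From another shell you can verify the dummy server; you should get the 200 back, and the exchange is recorded in session.txt:

$ curl -v http://localhost:9000/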
Monday, March 28, 2016
Backups with Duplicity and Dropbox
Dropbox is a very popular service for file storage; by default it synchronizes
all your files across your devices. This is important to know, since you will be backing up data into
Dropbox and you don't want to download the backups on every device you have connected.
What we want to do is to backup files, encrypt them and send them to Dropbox.
All this is achieved with Duplicity.
This is the setup
- Linux OS, any distro will work I guess but I tried on Ubuntu 14.04 LTS
- Dropbox account (going pro or business is recommended since backups will typically grow past the 2GB of a basic account)
To encrypt files you will need GPG. In case you don't have a key on your system
we need to do a bit of work; if you do have a gpg key you can skip the next section.
GPG Setup
In this section we will create the GPG public/private keys that will be used to encrypt the data you back up to Dropbox.
# install
$ sudo apt-get install gnupg
#
# check if you have any keys
#
$ gpg --list-keys
# if this is empty then you need to create a set of keys
# follow the wizard to create keys
#
$ gpg --gen-key
gpg (GnuPG) 1.4.16; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

gpg: keyring `/home/yourname/.gnupg/secring.gpg' created
Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all
Is this correct? (y/N) y

You need a user ID to identify your key; the software constructs the user ID
from the Real Name, Comment and Email Address in this form:
    "Heinrich Heine (Der Dichter) <heinrichh@duesseldorf.de>"

Real name: Your Name
Email address: yourname@gmail.com
Comment:
You selected this USER-ID:
    "Your Name <yourname@gmail.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
....+++++
..+++++
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
+++++
gpg: checking the trustdb
....
#
# At this point the keys are created and saved into your keyring
# list keys
#
$ gpg --list-keys
/home/yourname/.gnupg/pubring.gpg
--------------------------------
pub   2048R/999B4B79 2016-03-26
            ^^^^^^^^ <- keyid used by duplicity
uid                  Your Name <yourname@gmail.com>
sub   2048R/99917D12 2016-03-26

# Note 999B4B79 which is your keyid
Duplicity install
$ sudo apt-get install duplicity
After installation if you are on Ubuntu 14.04 LTS you will need to apply
this patch
http://bazaar.launchpad.net/~ed.so/duplicity/fix.dpbx/revision/965#duplicity/backends/dpbxbackend.py
to /usr/lib/python2.7/dist-packages/duplicity/backends/dpbxbackend.py
If you don't know how to apply the patch, it is simpler to open the file at line 75 and add the following line:
72 def command(login_required=True):
73     """a decorator for handling authentication and exceptions"""
74     def decorate(f):
75         def wrapper(self, *args):
76             from dropbox import rest  ## line to add
77             if login_required and not self.sess.is_linked():
78                 log.FatalError("dpbx Cannot login: check your credentials", log.ErrorCode.dpbx_nologin)
Dropbox and duplicity setup
You need to have an account first. Open your browser and login.
Backups with duplicity and dropbox
Since this is the first time you run it, you need to create an authorization token; this is done as follows:
$ duplicity --encrypt-key 999B4B79 full SOURCE dpbx:///
------------------------------------------------------------------------
url: https://www.dropbox.com/1/oauth/authorize?oauth_token=TOKEN_HERE
Please authorize in the browser. After you're done, press enter.
Now authorize the application in your browser. This will create an access token in Dropbox.
You can see the linked apps by going to Security; you should see "backend for duplicity" under apps linked.
In case you need to know what token is in use, you can find it on your system in ~/.dropbox.token_store.txt
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
GnuPG passphrase:
Retype passphrase to confirm:
--------------[ Backup Statistics ]--------------
StartTime 1459031263.59 (Sat Mar 26 18:27:43 2016)
EndTime 1459031263.73 (Sat Mar 26 18:27:43 2016)
ElapsedTime 0.14 (0.14 seconds)
SourceFiles 2
SourceFileSize 1732720 (1.65 MB)
NewFiles 2
NewFileSize 1732720 (1.65 MB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 2
RawDeltaSize 1728624 (1.65 MB)
TotalDestinationSizeChange 388658 (380 KB)
Errors 0
-------------------------------------------------
Backups
When the first full backup has finished you can start making incremental backups, list the backup files etc.

# list the backup files
duplicity --encrypt-key 999B4B79 list-current-files dpbx:///
#
## Make an incremental backup
duplicity --encrypt-key 999B4B79 incr SOURCE dpbx:///
.....
.....
.....
duplicity --encrypt-key 999B4B79 list-current-files dpbx:///
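Restoring works the same way with source and target swapped; a minimal sketch (duplicity will refuse to overwrite an existing target unless you add --force):

# restore the latest backup into /tmp/restore
duplicity --encrypt-key 999B4B79 restore dpbx:/// /tmp/restore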
Troubleshooting
During a backup if you see something like
Attempt 1 failed. NameError: global name 'rest' is not defined
Attempt 2 failed. NameError: global name 'rest' is not defined
See the note about Ubuntu 14.04 because you need to patch the dpbxbackend.py file
Notes
If you use multiple computers and don't want to download all the backups
from Dropbox, you need to enable selective sync and exclude the Apps/duplicity
folder from Dropbox.
I haven't used duplicity for a long time and have heard some mixed opinions; some say it is excellent and some
say it has some design flaws (I haven't checked) where your full backup will be taken again after a while even if
you just do incrementals. Remains to be seen.
I guess if this doesn't work well I would look into Borg Backup, which seems to be the best these days since it
has dedup built in and many other features. One thing it doesn't have, though, is as many backends as duplicity, which
can use pretty much all cloud storage solutions around :).
Wednesday, January 13, 2016
Sublime Text X11 Forward - linux headless
One of the newer editors (compared with Vim or Emacs) is Sublime Text.
It has many useful features and is quite popular these days; combined with the Vintage package enabled (vim emulation) it is
quite interesting.
This post shows what I did to have sublime text 3 working on a remote headless linux server, I used CentOS 7.1 installed with the group Base.
Since sublime text needs a display to run you will need to install a few packages.
sudo yum install gtk2
sudo yum install pango
sudo yum install gtk2-devel
sudo yum install dejavu-sans-fonts # or the font of your choice
sudo yum install xorg-x11-xauth
After all these packages are installed the ssh server (sshd for CentOS) needs to have the following settings.
# /etc/ssh/sshd_config
X11Forwarding yes
X11DisplayOffset 10
TCPKeepAlive yes
X11UseLocalhost yes

Restart sshd in case you changed your config file:
sudo systemctl restart sshd
I used putty on a windows box so I had to make a small hack.
cd $HOME
touch .Xauthority # empty file
Windows based
Configure putty to enable X11 Forwarding and connect to your server. One more thing to mention: if you use Windows then you will also need to install the Xming program.
After you download it, run the installer and start the Xming server.
Linux
You will need to run an X server - it doesn't matter which one - and have X11 forwarded into the agent.

# when connecting add the -X
ssh -X my_host_with_sublime_installed
# Or enable X11Forward in your .ssh/config
# something like this will do
Host *
    ForwardX11 yes
In case sublime text is not installed, download it from their site (it is always nice to have a license too) and extract
the files; typically you will end up with a directory called sublime_text_3.
# check first that the display is forwarded
$ echo $DISPLAY
localhost:10.0
$ cd sublime_text_3
$ ./sublime_text --wait
# At this point on your local screen (display) you should see a window pop up with sublime text.
Thursday, December 18, 2014
Supervisor (python supervisord) email alerts
The program supervisor, written in python, is used to supervise long running processes. If a long running process stops (crashes), supervisor detects it and restarts it. You will get entries in the log files, but unless you have a log aggregation tool, log into the server, or have some other monitoring tool, you will not know that your process has crashed.
However there is hope :) - you can set up an event listener in supervisor which can email you when a process exits. To do so you will need to install the python package superlance. This is how the setup is done.
# install superlance
$ sudo pip install superlance
# if you don't have pip installed try easy_install
# configure supervisor to send events to crashmail
$ sudo vim /etc/supervisor/supervisord.conf
# change according to your setup
[eventlistener:crashmail]
command=crashmail -a -m root@localhost
events=PROCESS_STATE_EXITED
$ sudo service supervisor stop && sudo service supervisor start
# done :)
In the example above, if a process crashes (exits) an event will be sent to crashmail, which in turn
will email root@localhost - of course you can change the email address. crashmail actually uses sendmail
to send email (postfix and qmail come with a sendmail-like program so no worries).
Also, the email alert will be sent out for any program that crashes; if you want to filter, you can choose
just the programs you want by specifying -p program_name instead of -a. For more info see the Crashmail section of the superlance docs.
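For example, a listener that only alerts for one program (my_program and the address are placeholders for your setup):

[eventlistener:crashmail]
command=crashmail -p my_program -m ops@example.com
events=PROCESS_STATE_EXITED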
Monday, October 14, 2013
Vim paste tricks
function! IndentPasteOff()
    set noai nocin nosi inde=
endfunction

function! IndentPasteOn()
    set ai cin si
endfunction

nmap _0 :call IndentPasteOff()<CR>
nmap _1 :call IndentPasteOn()<CR>

" paste on/off - bind the toggle to a key of your choice
set pastetoggle=<F2>

Now when you don't want any indenting type _0, and to indent again type _1. Happy viming!
Wednesday, February 29, 2012
Tricks with nose and python
Nose is a very useful tool for running unit tests in python. These are a few tricks you can use. Below is my test file - I called it test.py
# dummy case test
class Test():

    def test_algo(self):
        assert 0 == 0, '0 is not equal to 0'

    def test_failed(self):
        print 'this will fail'
        assert 1 == 0, '0 is not equal to 1'

    def test_fail_inpdb(self):
        # div by 0
        1/0
Now let's see what this is about:
- first I use assert to check if the results match
- based on the asserts, the first function will pass and the second will fail
- the last function will trigger an Error, not a Failure
# running with pdb so any Error not Failure will drop me into python debugger
$ nosetests --pdb test.py
.
> /home/silviud/PROGS/PYTHON/wal/tests/test.py(14)test_fail_inpdb()
-> 1/0
(Pdb) l
9 print 'this will fail'
10 assert 1 == 0, '0 is not equal to 1'
11
12 def test_fail_inpdb(self):
13 # div by 0
14 -> 1/0 #### this is the line that triggers the error
[EOF]
(Pdb) c
EF
======================================================================
ERROR: tests.test.Test.test_fail_inpdb
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/silviud/Environments/2.7/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/home/silviud/PROGS/PYTHON/wal/tests/test.py", line 14, in test_fail_inpdb
1/0
ZeroDivisionError: integer division or modulo by zero
======================================================================
FAIL: tests.test.Test.test_failed
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/silviud/Environments/2.7/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/home/silviud/PROGS/PYTHON/wal/tests/test.py", line 10, in test_failed
assert 1 == 0, '0 is not equal to 1'
AssertionError: 0 is not equal to 1
-------------------- >> begin captured stdout << ---------------------
this will fail
--------------------- >> end captured stdout << ----------------------
----------------------------------------------------------------------
Ran 3 tests in 8.453s
FAILED (errors=1, failures=1)
Now let's pretend that I run this regularly as part of my Continuous Integration server, which happens
to be running Jenkins. How can I integrate the python unittests with it?!
Simple - nose has many plugins and one of them is xunit.
$ nosetests --with-xunit test.py
....
$ cat nosetests.xml
<?xml version="1.0" encoding="UTF-8"?><testsuite name="nosetests" tests="3" errors="1" failures="1" skip="0"><testcase classname="tests.test.Test" name="test_algo" time="0.000" /><testcase classname="tests.test.Test" name="test_fail_inpdb" time="0.000"><error type="exceptions.ZeroDivisionError" message="integer division or modulo by zero"><![CDATA[Traceback (most recent call last):
File "/usr/lib/python2.7/unittest/case.py", line 321, in run
testMethod()
File "/home/silviud/Environments/2.7/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/home/silviud/PROGS/PYTHON/wal/tests/test.py", line 14, in test_fail_inpdb
1/0
ZeroDivisionError: integer division or modulo by zero
]]></error></testcase><testcase classname="tests.test.Test" name="test_failed" time="0.001"><failure type="exceptions.AssertionError" message="0 is not equal to 1 -------------------- >> begin captured stdout << --------------------- this will fail --------------------- >> end captured stdout << ----------------------"><![CDATA[Traceback (most recent call last):
File "/usr/lib/python2.7/unittest/case.py", line 321, in run
testMethod()
File "/home/silviud/Environments/2.7/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/home/silviud/PROGS/PYTHON/wal/tests/test.py", line 10, in test_failed
assert 1 == 0, '0 is not equal to 1'
AssertionError: 0 is not equal to 1
-------------------- >> begin captured stdout << ---------------------
this will fail
--------------------- >> end captured stdout << ----------------------
]]></failure></testcase></testsuite>
Just by adding --with-xunit, nose activated the xunit plugin and generated an xml file, nosetests.xml - this file can be used by Jenkins to decide whether the build failed or not!
Monday, February 20, 2012
Shell parallel processing
This is the description of the tool from the gnu site - Parallel
GNU parallel is a shell tool for executing jobs in parallel using one
or more computers. A job can be a single command or a small script
that has to be run for each of the lines in the input. The typical
input is a list of files, a list of hosts, a list of users, a list of
URLs, or a list of tables. A job can also be a command that reads from
a pipe. GNU parallel can then split the input into blocks and pipe a
block into each command in parallel.
The tool can do many things and comes with some very useful companion tools -
see sql and niceload.
Below you can see an example of how to use it.
#!/bin/sh
# tail log files on different computers
# create a hosts file with all the computers you want to connect
echo '10.100.218.79' >> host.file
echo '107.22.24.219' >> host.file
cat host.file | parallel ssh {} "tail /var/log/php-fpm/error.log | awk '{print \$1,\$2,\$3,\$4,\$5,\$6}'"
[20-Feb-2012 15:05:05.323178] DEBUG: pid 7812, fpm_pctl_perform_idle_server_maintenance(),
[20-Feb-2012 15:05:06.324028] DEBUG: pid 7812, fpm_pctl_perform_idle_server_maintenance(),
[20-Feb-2012 15:05:07.324877] DEBUG: pid 7812, fpm_pctl_perform_idle_server_maintenance(),
[20-Feb-2012 15:05:08.325727] DEBUG: pid 7812, fpm_pctl_perform_idle_server_maintenance(),
[20-Feb-2012 15:05:09.326568] DEBUG: pid 7812, fpm_pctl_perform_idle_server_maintenance(),
[20-Feb-2012 15:05:10.327418] DEBUG: pid 7812, fpm_pctl_perform_idle_server_maintenance(),
[20-Feb-2012 15:05:11.328265] DEBUG: pid 7812, fpm_pctl_perform_idle_server_maintenance(),
[20-Feb-2012 15:05:12.329118] DEBUG: pid 7812, fpm_pctl_perform_idle_server_maintenance(),
[20-Feb-2012 15:05:13.329960] DEBUG: pid 7812, fpm_pctl_perform_idle_server_maintenance(),
[20-Feb-2012 15:05:14.330806] DEBUG: pid 7812, fpm_pctl_perform_idle_server_maintenance(),
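parallel is just as useful on a single machine; a minimal sketch that runs one gzip per CPU core over a directory of log files:

# compress all log files in parallel, one job per core by default
ls *.log | parallel gzip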
GNU's site has lots more examples - see Examples.
Monday, January 23, 2012
Install cProfile on debian 6.0.3 (squeeze)
So you have just seen this when you tried to profile something on debian squeeze:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.6/cProfile.py", line 36, in run
result = prof.print_stats(sort)
File "/usr/lib/python2.6/cProfile.py", line 80, in print_stats
import pstats
ImportError: No module named pstats
Well, all that is needed to fix it is to enable a repository and then install python-profiler:
echo 'deb http://ftp.ca.debian.org/debian squeeze main non-free' >> /etc/apt/sources.list
# replace .ca. with your country code
apt-get update
aptitude install python-profiler
python
>>> import cProfile
>>> def f():
... print 'called'
...
>>> cProfile.run('f()')
called
3 function calls in 0.000 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 <string>:1(f)
1 0.000 0.000 0.000 0.000 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
Monday, December 12, 2011
EC2 raid10 for mongo db
Running mongo db on a raid10 (software raid) in ec2 is done via ebs volumes. I'll show you how to:
- create the raid10 on 8 ebs volumes
- (re)start mdadm on the raid device
- mount the raid10 device and start using it
Initial Creation of the raid
# you will need to have your ebs volumes attached to the server
mdadm --create --verbose /dev/md0 --level=10 --raid-devices=8 /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq
# now create a file system
mkfs.xfs /dev/md0
#mount the drive
mount /dev/md0 /mnt/mongo/data
# Obtain information about the array
mdadm --detail /dev/md0 # query detail
/dev/md0:
Version : 0.90
Creation Time : Wed Oct 26 19:37:16 2011
Raid Level : raid10
Array Size : 104857344 (100.00 GiB 107.37 GB)
Used Dev Size : 26214336 (25.00 GiB 26.84 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Dec 12 15:56:48 2011
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 64K
UUID : 144894cd:3b083374:1fa88d23:e4200572
Events : 0.30
Number Major Minor RaidDevice State
0 8 144 0 active sync /dev/sdj
1 8 160 1 active sync /dev/sdk
2 8 176 2 active sync /dev/sdl
3 8 192 3 active sync /dev/sdm
4 8 208 4 active sync /dev/sdn
5 8 224 5 active sync /dev/sdo
6 8 240 6 active sync /dev/sdp
7 65 0 7 active sync /dev/sdq
# note the UUID and the devices
# Start the mongo database
/etc/init.d/mongod start
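Before rebooting you may want to persist the array definition so it can be reassembled automatically; a minimal sketch (on debian/ubuntu the file is /etc/mdadm/mdadm.conf):

# record the array and its UUID for reassembly at boot
mdadm --detail --scan >> /etc/mdadm.conf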
Shutdown (reboot) the server
# restart the array device - you need to have the ebs volumes re-attached!
mdadm -Av /dev/md0 --uuid=144894cd:3b083374:1fa88d23:e4200572 /dev/sd*
mdadm: looking for devices for /dev/md0
mdadm: cannot open device /dev/sda1: Device or resource busy
mdadm: /dev/sda1 has wrong uuid.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm: /dev/sdb has wrong uuid.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: /dev/sdc has wrong uuid.
mdadm: cannot open device /dev/sdr: Device or resource busy
mdadm: /dev/sdr has wrong uuid.
mdadm: cannot open device /dev/sds: Device or resource busy
mdadm: /dev/sds has wrong uuid.
mdadm: /dev/sdj is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdk is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sdl is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sdm is identified as a member of /dev/md0, slot 3.
mdadm: /dev/sdn is identified as a member of /dev/md0, slot 4.
mdadm: /dev/sdo is identified as a member of /dev/md0, slot 5.
mdadm: /dev/sdp is identified as a member of /dev/md0, slot 6.
mdadm: /dev/sdq is identified as a member of /dev/md0, slot 7.
mdadm: added /dev/sdk to /dev/md0 as 1
mdadm: added /dev/sdl to /dev/md0 as 2
mdadm: added /dev/sdm to /dev/md0 as 3
mdadm: added /dev/sdn to /dev/md0 as 4
mdadm: added /dev/sdo to /dev/md0 as 5
mdadm: added /dev/sdp to /dev/md0 as 6
mdadm: added /dev/sdq to /dev/md0 as 7
mdadm: added /dev/sdj to /dev/md0 as 0
mdadm: /dev/md0 has been started with 8 drives.
# now you can mount the array
mount /dev/md0 /mnt/mongo/data/
# start the mongo database
/etc/init.d/mongod start
Thursday, November 17, 2011
Apache rewrite rule to redirect to https
Problem - you want to redirect all http traffic to https.
The following rewrite rule will redirect any web site that
you are running, so there is no need to hardcode the server name.
# into httpd.conf write the following
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
# then restart apache
# this requires a working version of https on the same web server
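By default a RewriteRule with an absolute URL issues a temporary (302) redirect. If you want clients to cache the redirect and rule processing to stop there, the same rule can carry explicit flags:

RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]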
Thursday, November 3, 2011
Howto create an AMI from a running instance with the EC2 cli
In order to create an ami from an EC2 running instance you will need:
- certificate file from your aws account credentials
- private key for the certificate file from your aws account credentials (you can download this only at certificate creation)
- access by ssh to your running instance
- access key for AWS
- access secret key for AWS
- any ec2 tools - I used amitools
# create the bundle under /mnt
ec2-bundle-vol -d /mnt -k /root/key.pem -c /root/cer.pem -u xxxxxxxxxxxx
# xxxxxxxxxxxx is your account number without dashes
ec2-upload-bundle -b YOURBUCKET -m /mnt/image.manifest.xml -a YOUR_ACCESS_KEY -s YOUR_ACCESS_SECRET_KEY
# register the ami so is available
ec2-register -K /root/key.pem -C /root/cer.pem -n SERVER_NAME YOURBUCKET/image.manifest.xml
# this will respond with something like
IMAGE ami-xxxxxxxx
# At this point you can go into the aws console and boot a new instance from the ami you registered.
# to deregister the ami
ec2-deregister ami-xxxxxxxx
Wednesday, September 21, 2011
From domU read the xenstore (ec2, linode etc)
In case you wonder what dom0 is running for your instance/vps, this will give you information from the xenstore. Taken from a FreeBSD recipe and adapted to Linux.
Building & installation
-----------------------
Prerequisites: make, XENHVM or XEN kernel (GENERIC will not work) - all this is already there if you run as pv.
1. wget http://bits.xensource.com/oss-xen/release/4.1.1/xen-4.1.1.tar.gz
2. tar xvfz xen-4.1.1.tar.gz
3. cd xen-4.1.1/tools
4. make -C include
5. cd misc
6. make xen-detect
7. install xen-detect /usr/local/bin
8. cd ../xenstore
9. Build client library and programs:
make clients
10. Install client library and programs:
install libxenstore.so.3.0 /usr/local/lib
install xenstore xenstore-control /usr/local/bin
cd /usr/local/bin
ln xenstore xenstore-chmod
ln xenstore xenstore-exists
ln xenstore xenstore-list
ln xenstore xenstore-ls
ln xenstore xenstore-read
ln xenstore xenstore-rm
ln xenstore xenstore-write
(in case that your ld loader doesn't look into /usr/local/lib do this
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib)
Usage
-----
1. Set required environment variable:
export XENSTORED_PATH=/dev/xen/xenstore -- FreeBSD
export XENSTORED_PATH=/proc/xen/xenbus -- Linux
2. Now you can do things such as:
xen-detect
xenstore-ls device
xenstore-ls -f /local/domain/0/backend/vif/11/0
xenstore-read name
Tuesday, September 20, 2011
Ec2 metadata
In case you are looking for more info while you are in an ec2 instance, you can call
the ec2 metadata api server from within the instance.
$ curl http://169.254.169.254/latest/meta-data/
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
kernel-id
local-hostname
local-ipv4
mac
network/
placement/
profile
public-hostname
public-ipv4
public-keys/
ramdisk-id
reservation-id
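Each entry is itself a path you can fetch; for example, to read the instance id (i-xxxxxxxx is a placeholder):

$ curl http://169.254.169.254/latest/meta-data/instance-id
i-xxxxxxxx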