Monday, December 12, 2011

EC2 raid10 for mongo db

Running MongoDB on a RAID10 (software RAID) in EC2 is done on top of EBS volumes. I'll show you how to:

  • create the raid10 on 8 ebs volumes
  • (re) start the mdadm on the raid device
  • mount the raid10 device and start using
I didn't use any config files for the raid devices, so you will need to know how the devices are mapped and what UUID the RAID10 array has.
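
Once the array is built, both pieces of information can be recovered from the array itself; a quick sketch (the exact output will differ on your instance):

# print the array line with its UUID - handy to save somewhere
mdadm --detail --scan
# ARRAY /dev/md0 level=raid10 num-devices=8 UUID=144894cd:3b083374:1fa88d23:e4200572
# and /proc/mdstat shows which sd* devices currently back md0
cat /proc/mdstat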

Initial Creation of the raid

# you will need to have your ebs volumes attached to the server
mdadm --create --verbose /dev/md0 --level=10 --raid-devices=8 /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq

# now create a file system 
mkfs.xfs /dev/md0

#mount the drive
mount /dev/md0 /mnt/mongo/data


# Obtain information about the array
mdadm --detail /dev/md0 # query detail

/dev/md0:
        Version : 0.90
  Creation Time : Wed Oct 26 19:37:16 2011
     Raid Level : raid10
     Array Size : 104857344 (100.00 GiB 107.37 GB)
  Used Dev Size : 26214336 (25.00 GiB 26.84 GB)
   Raid Devices : 8
  Total Devices : 8
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Dec 12 15:56:48 2011
          State : clean
 Active Devices : 8
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 64K

           UUID : 144894cd:3b083374:1fa88d23:e4200572
         Events : 0.30

    Number   Major   Minor   RaidDevice State
       0       8      144        0      active sync   /dev/sdj
       1       8      160        1      active sync   /dev/sdk
       2       8      176        2      active sync   /dev/sdl
       3       8      192        3      active sync   /dev/sdm
       4       8      208        4      active sync   /dev/sdn
       5       8      224        5      active sync   /dev/sdo
       6       8      240        6      active sync   /dev/sdp
       7      65        0        7      active sync   /dev/sdq

# note the UUID and the devices
# Start the mongo database
/etc/init.d/mongod start

Shutdown (reboot) the server 

# restart the array device - you need to have the ebs volumes re-attached!
mdadm -Av /dev/md0 --uuid=144894cd:3b083374:1fa88d23:e4200572  /dev/sd*
mdadm: looking for devices for /dev/md0
mdadm: cannot open device /dev/sda1: Device or resource busy
mdadm: /dev/sda1 has wrong uuid.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm: /dev/sdb has wrong uuid.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: /dev/sdc has wrong uuid.
mdadm: cannot open device /dev/sdr: Device or resource busy
mdadm: /dev/sdr has wrong uuid.
mdadm: cannot open device /dev/sds: Device or resource busy
mdadm: /dev/sds has wrong uuid.
mdadm: /dev/sdj is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdk is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sdl is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sdm is identified as a member of /dev/md0, slot 3.
mdadm: /dev/sdn is identified as a member of /dev/md0, slot 4.
mdadm: /dev/sdo is identified as a member of /dev/md0, slot 5.
mdadm: /dev/sdp is identified as a member of /dev/md0, slot 6.
mdadm: /dev/sdq is identified as a member of /dev/md0, slot 7.
mdadm: added /dev/sdk to /dev/md0 as 1
mdadm: added /dev/sdl to /dev/md0 as 2
mdadm: added /dev/sdm to /dev/md0 as 3
mdadm: added /dev/sdn to /dev/md0 as 4
mdadm: added /dev/sdo to /dev/md0 as 5
mdadm: added /dev/sdp to /dev/md0 as 6
mdadm: added /dev/sdq to /dev/md0 as 7
mdadm: added /dev/sdj to /dev/md0 as 0
mdadm: /dev/md0 has been started with 8 drives.

# now you can mount the array
mount /dev/md0 /mnt/mongo/data/

# start the mongo database
/etc/init.d/mongod start
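
Before pointing mongod at the data directory it doesn't hurt to confirm the array and the mount really came back; a minimal check along these lines:

# is md0 assembled and are all 8 members active?
grep -A1 md0 /proc/mdstat
# is the filesystem mounted where mongod expects its data?
df -h /mnt/mongo/data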

Thursday, November 17, 2011

Apache rewrite rule to redirect to https

Problem - you want to redirect all HTTP traffic to HTTPS.
The following rewrite rule will redirect any web site that
you are running, so there is no need to hard-code the server name.


# into httpd.conf write the following

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}

# then restart apache 
# this requires a working HTTPS setup on the same web server
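
A quick way to check the redirect from a shell once apache is back up (the hostname here is just a placeholder for one of your sites):

# expect a 3xx status and a Location: https://... header
curl -I http://www.example.com/some/path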

Thursday, November 3, 2011

Howto create an AMI from a running instance into Ec2 cli

In order to create an AMI from a running EC2 instance you will need:

  • certificate file from your aws account credentials
  • private key for the certificate file from your aws account credentials (you can download this only at certificate creation)
  • access by ssh to your running instance
  • access key for AWS
  • access secret key for AWS
  • any ec2 tools - I used amitools

# create the bundle under /mnt
ec2-bundle-vol -d /mnt -k /root/key.pem -c /root/cer.pem -u xxxxxxxxxxxx
# xxxxxxxxxxxx is your account number without dashes
ec2-upload-bundle -b YOURBUCKET -m /mnt/image.manifest.xml -a YOUR_ACCESS_KEY -s YOUR_ACCESS_SECRET_KEY
# register the ami so it is available 
ec2-register -K /root/key.pem -C /root/cer.pem -n SERVER_NAME YOURBUCKET/image.manifest.xml
# this will respond with something like 
IMAGE   ami-xxxxxxxx

# At this point you can go into the aws console and boot a new instance from the ami you registered.
# to deregister the ami 
ec2-deregister  ami-xxxxxxxx
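
To boot from the new image straight from the cli instead of the console, something along these lines should work with the same ec2 tools (keypair name and instance type are placeholders; your credentials need to be exported or passed via -K/-C as above):

# launch one instance from the freshly registered ami
ec2-run-instances ami-xxxxxxxx -k MY_KEYPAIR -t m1.small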

Wednesday, September 21, 2011

From domU read the xenstore (ec2, linode etc)

In case you wonder what dom0 is running for your instance/VPS, this will give you information from the xenstore. Taken from a FreeBSD recipe and adapted to Linux.

Building & installation
-----------------------

Prerequisites: make, a XENHVM or XEN kernel (GENERIC will not work) - all of this is already there if you run as PV.

1. wget http://bits.xensource.com/oss-xen/release/4.1.1/xen-4.1.1.tar.gz

2. tar xvfz xen-4.1.1.tar.gz

3. cd xen-4.1.1/tools

4. make -C include

5. cd misc

6. make xen-detect

7. install xen-detect /usr/local/bin

8. cd ../xenstore

9. Build client library and programs:
  make clients

10. Install client library and programs:
  install libxenstore.so.3.0 /usr/local/lib
  install xenstore xenstore-control /usr/local/bin
  cd /usr/local/bin
  ln xenstore xenstore-chmod
  ln xenstore xenstore-exists
  ln xenstore xenstore-list
  ln xenstore xenstore-ls
  ln xenstore xenstore-read
  ln xenstore xenstore-rm
  ln xenstore xenstore-write

(in case your ld loader doesn't look into /usr/local/lib, do this:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib)


Usage
-----

1. Set required environment variable:
  export XENSTORED_PATH=/dev/xen/xenstore -- FreeBSD
  export XENSTORED_PATH=/proc/xen/xenbus -- Linux

2. Now you can do things such as:
  xen-detect
  xenstore-ls device
  xenstore-ls -f /local/domain/0/backend/vif/11/0
  xenstore-read name
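
A couple of extra reads that I find useful on EC2-style setups (what is actually exposed depends on the dom0, so treat these as a sketch):

  xenstore-read domid                                  # this domU's id
  xenstore-ls /local/domain/$(xenstore-read domid)     # everything published for this domU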

Tuesday, September 20, 2011

Ec2 metadata

In case you are looking for more info while you are inside an EC2 instance, you can call
the EC2 metadata API server from within the instance.

$ curl http://169.254.169.254/latest/meta-data/
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
kernel-id
local-hostname
local-ipv4
mac
network/
placement/
profile
public-hostname
public-ipv4
public-keys/
ramdisk-id
reservation-id
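
Each entry can be fetched individually by appending it to the URL, and user data (if any was passed at launch) lives one level up:

$ curl http://169.254.169.254/latest/meta-data/instance-id
$ curl http://169.254.169.254/latest/meta-data/public-ipv4
$ curl http://169.254.169.254/latest/user-data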

Thursday, August 18, 2011

MySQL cluster (ndb engine) setup

Setup Mysql cluster with the NDB engine


MySQL Cluster is a high-availability RDBMS that can be
downloaded from mysql.com.
It uses NDB as its storage engine, an engine originally developed
at Ericsson and based on a shared-nothing architecture.

The roles in the setup are split as follows:

- mysql server (a plain mysql server configured with the NDB engine)
- data nodes (storage nodes that run the ndbd daemon)
- management server (orchestrates all the actions in the cluster)

My setup
mgmt_node    : 192.168.149.128
data_node_1  : 192.168.149.130
mysql_server : 192.168.149.131

[Installation]

mgmt_node
- install client and management rpms
128#rpm -ivh MySQL-Cluster-gpl-client-7.1.15-1.rhel5.i386.rpm
128#rpm -ivh MySQL-Cluster-gpl-management-7.1.15-1.rhel5.i386.rpm
128#rpm -ivh MySQL-Cluster-gpl-tools-7.1.15-1.rhel5.i386.rpm

data_node_1
- install storage
130#rpm -ivh MySQL-Cluster-gpl-storage-7.1.15-1.rhel5.i386.rpm

mysql_server
- install mysql server
131#rpm -ivh MySQL-Cluster-gpl-server-7.1.15-1.rhel5.i386.rpm
131#rpm -ivh MySQL-Cluster-gpl-client-7.1.15-1.rhel5.i386.rpm

[Configuration]

mgmt_node
- configure location
128# mkdir /mysql-cluster
- configuration file

128# cat > /mysql-cluster/config.ini << CONFIG
[NDBD DEFAULT]
NoOfReplicas=1
DataMemory=20M
IndexMemory=10M

[TCP DEFAULT]
portnumber=1186

[NDB_MGMD]
hostname=192.168.149.128
datadir=/mysql-cluster

# repeat this section for each data node in the cluster
[NDBD]
hostname=192.168.149.130
datadir=/mysql-cluster/data

[MYSQLD]
hostname=192.168.149.131

CONFIG


data_node_1
- configure location
130# mkdir -p /mysql-cluster/data
- configuration file
130# cat > /etc/my.cnf << CONFIG
[MYSQLD]
ndbcluster
ndb-connectstring=192.168.149.128

[MYSQL_CLUSTER]
ndb-connectstring=192.168.149.128

CONFIG

mysql_server
- configure location
131# mkdir -p /mysql-cluster/data
- configuration file
131# cat > /etc/my.cnf << CONFIG
[MYSQLD]
ndbcluster
ndb-connectstring=192.168.149.128

[MYSQL_CLUSTER]
ndb-connectstring=192.168.149.128

CONFIG


[Startup]

mgmt_node
128#ndb_mgmd --initial -f /mysql-cluster/config.ini
MySQL Cluster Management Server mysql-5.1.56 ndb-7.1.15
2011-08-18 07:09:50 [MgmtSrvr] INFO     -- The default config directory
'/usr/mysql-cluster' does not exist. Trying to create
it...
2011-08-18 07:09:50 [MgmtSrvr] INFO     -- Sucessfully created config
directory
2011-08-18 07:09:50 [MgmtSrvr] WARNING  -- at line 7: [TCP] portnumber is
deprecated

data_node_1
130#ndbd --initial
Unable to connect with connect string: nodeid=0,192.168.149.128:1186
Retrying every 5 seconds. Attempts left: 12 11 10 9 8 7 6 5
2011-08-18 07:11:44 [ndbd] INFO     -- Angel connected to
'192.168.149.128:1186'
2011-08-18 07:11:44 [ndbd] INFO     -- Angel allocated nodeid: 2

mysql_node
131#/etc/init.d/mysql start
Starting MySQL.... SUCCESS!


[Running Operations]

128#ndb_mgm
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     1 node(s)
id=2    @192.168.149.130  (mysql-5.1.56 ndb-7.1.15, Nodegroup: 0, Master)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.149.128  (mysql-5.1.56 ndb-7.1.15)

[mysqld(API)]   1 node(s)
id=3    @192.168.149.131  (mysql-5.1.56 ndb-7.1.15)
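
To double check from the sql node that the NDB engine is really wired in, something like this works on a fresh install (assuming root can still connect without a password; database and table names below are just examples):

131# mysql -u root -e "SHOW ENGINES" | grep -i ndbcluster
131# mysql -u root -e "CREATE DATABASE IF NOT EXISTS clustertest"
131# mysql -u root -e "CREATE TABLE clustertest.t1 (id INT PRIMARY KEY) ENGINE=NDBCLUSTER"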

Monday, August 15, 2011

Cisco ace - virtualization

Ace is a load balancer from cisco systems - see http://www.cisco.com/en/US/products/ps6906/index.html for details


How to verify the virtualization options:

  show running-config context
  show running-config domain
  show running-config resource-class
  show running-config role

Configure a context
  host1/Admin# config
  (config)#
  host1/Admin(config)# context C1 # Creates a context & enter configuration mode.
  host1/Admin(config-context)  
  host1/Admin(config)# no context C1 # Deletes context
  host1/Admin(config-context)# do copy running-config startup-config # save config
Moving between contexts
  host1/Admin# changeto C1 # Change onto C1 context
  host1/C1#
  host1/C1# exit # exit from context
  show service-policy summary |i IP # shows which policies are available for that IP
  show probe |i ip|port # tells you which probes are in place
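
Two more show commands that come in handy when you are juggling contexts (written from memory, so verify against your ACE software version):

  show context         # lists the contexts defined on the box
  show resource usage  # how much of each resource the contexts are consuming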

Monday, June 13, 2011

Quick start with GlusterFS

The software from www.gluster.org allows you to build distributed file systems on commodity hardware.


Rpm
===

http://download.gluster.com/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-fuse-3.2.0-1.x86_64.rpm
http://download.gluster.com/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-core-3.2.0-1.x86_64.rpm
http://download.gluster.com/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-rdma-3.2.0-1.x86_64.rpm


Gluster Daemon
==============

/etc/init.d/glusterd start


Firewall
========

iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24008 -j ACCEPT
iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24009:24014 -j ACCEPT


Bricks
======

gluster peer probe SERVER_NAME_OR_IP


Volumes
=======

    Creation
    ========
    # replicated (mirror onto 2 hosts)
    gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2

    Start
    =====
    gluster volume start test-volume

    List
    ====
    gluster volume info (all|vol_name)

    Mount
    =====
    mount -t glusterfs HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR
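
    For example, mounting the replicated volume created above from a client could look like this (hostname and mount point are placeholders):

    mount -t glusterfs server1:/test-volume /mnt/glusterfs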




Saturday, May 7, 2011

Ssh execute remote commands

Ssh is a very, very useful tool and some tricks with it will save you a lot of time and typing. For example, I want to transfer a file to a remote host, such as my ssh key file. I have a few options:

  • use scp to transfer the file, then login into the remote system and execute the commands.
  • use a different mechanism to transfer the files(ftp etc), login and execute the commands.
  • do it all in one command ... this is the cool one - see below


shell$ cat ~sd/.ssh/id_rsa.pub | ssh root@192.168.0.105 'cat - > .ssh/authorized_keys'
shell$ cat ~sd/.ssh/id_rsa.pub | ssh root@192.168.0.105 'cat - > .ssh/authorized_keys2'

# usually authorized_keys is a link to authorized_keys2 but in this case I just write two separate files.
# as you can see there is nothing else to do :)
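
A slightly more defensive variant of the same trick, in case .ssh does not exist yet on the remote side and you would rather append than overwrite (a sketch, same host as above):

shell$ cat ~sd/.ssh/id_rsa.pub | ssh root@192.168.0.105 \
  'mkdir -p .ssh && chmod 700 .ssh && cat - >> .ssh/authorized_keys && chmod 600 .ssh/authorized_keys'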


Thursday, April 28, 2011

Am I hacked ?

You do a ps -ef and you think all is good ... but perhaps what you see is not exactly what is really running ... This is a simple but effective way to compare the running processes reported by ps with what is in /proc.

shell$ ps ax | wc -l
30
shell$ ls -d /proc/* | grep '[0-9]' | wc -l
31 # there is one extra - a root kit perhaps :)
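
A slightly tighter version of the same check - note that the counts can still differ by one or two simply because ps, the pipeline and your shell all show up as processes themselves:

shell$ ps ax --no-headers | wc -l
shell$ ls -d /proc/[0-9]* | wc -l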

Tuesday, April 26, 2011

What happens when you do kill a program in linux ?

I had two simple questions:


  • q1: how do you stop (kill) a program in linux ?

  • a1: I use the kill command, as in
    kill 99 or kill -9 99

  • q2: ok ... so what really happens ?

  • a2: hmm ... good question - well, I send a signal to the program via a system call and then the kernel takes care of the rest ... as in, it kills the program
    q2.1: hmm, so how does it kill it ?! what really happens ?
    a2.1: you know what, let me think about it ... yeah, I didn't look into this - well, let me trace it and we will find out ...
    shell$ bash &
    [2] 29120
    shell$ strace kill 29120
    execve("/usr/bin/kill", ["kill", "29120"], [/* 23 vars */]) = 0
    brk(0)                                  = 0x8849000
    access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
    open("/etc/ld.so.cache", O_RDONLY)      = 3
    fstat64(3, {st_mode=S_IFREG|0644, st_size=24036, ...}) = 0
    mmap2(NULL, 24036, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7f1e000
    close(3)                                = 0
    open("/lib/libc.so.6", O_RDONLY)        = 3
    read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\340\17K\0004\0\0\0"..., 512) = 512
    fstat64(3, {st_mode=S_IFREG|0755, st_size=1611564, ...}) = 0
    mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7f1d000
    mmap2(0x49b000, 1332676, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x110000
    mprotect(0x24f000, 4096, PROT_NONE)     = 0
    mmap2(0x250000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x13f) = 0x250000
    mmap2(0x253000, 9668, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x253000
    close(3)                                = 0
    mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7f1c000
    set_thread_area({entry_number:-1 -> 6, base_addr:0xb7f1c6c0, limit:1048575, seg_32bit:1, contents:0, read_exec_only:0, limit_in_pages:1, seg_not_present:0, useable:1}) = 0
    mprotect(0x250000, 8192, PROT_READ)     = 0
    mprotect(0x497000, 4096, PROT_READ)     = 0
    munmap(0xb7f1e000, 24036)               = 0
    brk(0)                                  = 0x8849000
    brk(0x886a000)                          = 0x886a000
    kill(29120, SIGTERM)                    = 0
    exit_group(0)                           = ?
    
    
    from the second line from the bottom I can see that a kill(PID, SIGTERM) was sent to the process and the return code is 0 (meaning success), but what really happens inside the kernel ?! - it would take me a lot more to explain, but I found a good article about it at http://www.ibm.com/developerworks/library/l-linux-process-management/ - there is also a small signal demo below
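
    A small demo of why -9 is different: SIGTERM can be caught (or ignored) by the program, while SIGKILL cannot - the kernel simply removes the process. A quick, throwaway sketch in the shell:

    # a background loop that traps SIGTERM
    bash -c 'trap "echo got SIGTERM, still here" TERM; while true; do sleep 1; done' &
    kill $!      # the trap fires and the loop keeps running
    kill -9 $!   # SIGKILL cannot be trapped - the process is gone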

Submit puzzle to facebook's puzzle master

Facebook runs a robot that takes email attachments and runs them to solve a puzzle that is posted
at http://www.facebook.com/careers/puzzles.php#!/careers/puzzles.php .

This is what I did to submit the hoppity puzzle

shell$ echo 15 > file.txt 
shell$ python hoppity.py file.txt 
Hoppity
Hophop
Hoppity
Hoppity
Hophop
Hoppity
Hop
shell$ cat hoppity.py
#!/usr/bin/env python


import sys
if len(sys.argv) != 2:
    print 'run it as ', __file__, 'file.txt # file.txt should contain one unsigned int'
    sys.exit(1)

_file=sys.argv[1]

try:
    f = open(_file, 'r')
except IOError, ioe:
    print "file %s does not exist " % _file
    sys.exit(1)
except:
    print "can not open file %s" % _file
    sys.exit(1)


no = f.read() # assume ONE uint in _file
max = int(no.strip()) + 1

for i in xrange(1,max):
    if i % 3 == 0 and i % 5 == 0 :
            print 'Hop'
    elif i % 3 == 0: print 'Hoppity'
    elif i % 5 == 0: print 'Hophop'

try:
    f.close()
except:
    pass

to actually submit the program - archive it as

mv hoppity.py hoppity && tar cvfz hoppity.tar.gz hoppity # the bot doesn't take the extension so you have to cut it off
and send an email with the archive attached to 1051962371@fb.com

Monday, April 25, 2011

What pid has my shell ?

Sometimes you are logged into a system on different terminals and you want to figure out what process id you have on a specific terminal. The commands to do so are very simple:


shell$ tty
/dev/pts/1
# i'm on pts/1 meaning remote

shell$ ps -p $$
 PID TTY          TIME CMD
10044 pts/1    00:00:00 bash
# $$ expands to the pid of the current shell
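
The reverse question - what is running on a given terminal - can be answered with the -t switch:

shell$ ps -t pts/1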

Sunday, April 24, 2011

Tracing a telnet session with strace

Sometimes you just need to know if a port is open on a remote system. The simplest way to find out is to just telnet to the host and port number.
This should look like:

shell$ telnet localhost  23
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
telnet: Unable to connect to remote host: Connection refused

# let's redo it with strace enabled
shell$ strace -vo strace.telnet telnet localhost 23
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
telnet: Unable to connect to remote host: Connection refused
# the error is the same but now we do have the strace.telnet file to find more info

shell$ cat strace.telnet
execve("/usr/kerberos/bin/telnet", ["telnet", "localhost", "23"], ["HOSTNAME=localhost.localdomain", "TERM=xterm-color", "SHELL=/bin/bash", "HISTSIZE=1000", "SSH_CLIENT=10.211.55.2 62489 22", "SSH_TTY=/dev/pts/0", "USER=root", "LS_COLORS=no=00:fi=00:di=01;34:l", "MAIL=/var/spool/mail/root", "PATH=/usr/kerberos/sbin:/usr/ker", "INPUTRC=/etc/inputrc", "PWD=/root", "LANG=en_US.UTF-8", "SHLVL=1", "HOME=/root", "LOGNAME=root", "SSH_CONNECTION=10.211.55.2 62489", "LESSOPEN=|/usr/bin/lesspipe.sh %", "G_BROKEN_FILENAMES=1", "_=/usr/bin/strace", "OLDPWD=/usr/src"]) = 0
brk(0)                                  = 0x9c8d000
access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY)      = 3
fstat64(3, {st_dev=makedev(3, 1), st_ino=129997, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=56, st_size=26551, st_atime=2011/04/21-14:46:38, st_mtime=2011/04/21-07:09:59, st_ctime=2011/04/21-07:09:59}) = 0
mmap2(NULL, 26551, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7fe5000
close(3)                                = 0
open("/usr/lib/libkrb4.so.2", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0pB(\0004\0\0\0"..., 512) = 512
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7fe4000
fstat64(3, {st_dev=makedev(3, 1), st_ino=239248, st_mode=S_IFREG|0755, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=208, st_size=100960, st_atime=2011/04/21-14:46:38, st_mtime=2010/01/12-19:22:52, st_ctime=2011/04/08-04:02:50}) = 0
mmap2(NULL, 117948, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x6d3000
mmap2(0x6ea000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x17) = 0x6ea000
mmap2(0x6eb000, 19644, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x6eb000
close(3)                                = 0
open("/usr/lib/libdes425.so.3", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\300\354)\0004\0\0\0"..., 512) = 512
fstat64(3, {st_dev=makedev(3, 1), st_ino=236616, st_mode=S_IFREG|0755, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=32, st_size=12816, st_atime=2011/04/21-14:46:38, st_mtime=2010/01/12-19:22:52, st_ctime=2011/04/08-04:02:41}) = 0
mmap2(NULL, 13868, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x1d9000
mmap2(0x1dc000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2) = 0x1dc000
close(3)                                = 0
.......
.......
fstat64(1, {st_dev=makedev(0, 12), st_ino=2, st_mode=S_IFCHR|0620, st_nlink=1, st_uid=0, st_gid=5, st_blksize=4096, st_blocks=0, st_rdev=makedev(136, 0), st_atime=2011/04/21-14:46:38, st_mtime=2011/04/21-14:46:38, st_ctime=2011/04/21-04:36:50}) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7feb000
write(1, "Trying 127.0.0.1...\r\n", 21) = 21
socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 3
setsockopt(3, SOL_IP, IP_TOS, [16], 4)  = 0
connect(3, {sa_family=AF_INET, sin_port=htons(23), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 ECONNREFUSED (Connection refused)
write(2, "telnet: connect to address 127.0"..., 57) = 57
close(3)                                = 0
write(2, "telnet: Unable to connect to rem"..., 61) = 61
exit_group(1) 

# as you can see there is a lot of information and some of it I replaced with .....
# the line of interest is

connect(3, {sa_family=AF_INET, sin_port=htons(23), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 ECONNREFUSED (Connection refused)

# this is where the socket actually tries to connect to the remote host and gets a return code of -1; everything after it is just the telnet program formatting the error output very carefully.
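
# if you only care about the network syscalls, strace can filter them for you
# and skip all the library-loading noise above, for example:

shell$ strace -e trace=network -o strace.telnet telnet localhost 23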

Thursday, April 21, 2011

Check a linux filesystem with an alternate superblock

A filesystem contains different data structures after it is created, and one of the most important ones is the superblock - because it is that important, there is more than one copy of the superblock.
How to find the backup superblocks and how to run the filesystem check against one is shown below.

Since some partitions are labeled, we will need to find the device associated with each label. Here is how to look up the label/device association.


# this is my /etc/fstab

LABEL=/                 /                       ext3    defaults        1 1
LABEL=/opt              /opt                    ext3    defaults        1 2
LABEL=/tmp              /tmp                    ext3    defaults        1 2
LABEL=/usr              /usr                    ext3    defaults        1 2
LABEL=/home             /home                   ext3    defaults        1 2
LABEL=/logs             /logs                   ext3    defaults        1 2
LABEL=/var              /var                    ext3    defaults        1 2
LABEL=/boot             /boot                   ext3    defaults        1 2
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
LABEL=SWAP-sda9         swap                    swap    defaults        0 0

# i want to check the partition with label /opt  
# first step - find what device is associated with the label /opt
# to do this we use the command e2label
# if you have one or two partitions it is as simple as running e2label on those
# i have a few partitions so i made a small chained command 

root# for i in `mount | awk '{print $1}' | grep '/'`; do echo -n "$i=" && e2label $i   ;done
/dev/hda5=/
/dev/hda8=/tmp
/dev/hda7=/usr
/dev/hda6=/home
/dev/hda3=/logs
/dev/hda2=/var
/dev/hda1=/boot
/dev/hda10=/opt # this is the one I need

# running dumpe2fs
root# dumpe2fs /dev/hda10 |grep 'Backup superblock'
  Backup superblock at 32768, Group descriptors at 32769-32769
  Backup superblock at 98304, Group descriptors at 98305-98305
  Backup superblock at 163840, Group descriptors at 163841-163841
  Backup superblock at 229376, Group descriptors at 229377-229377
  Backup superblock at 294912, Group descriptors at 294913-294913
  Backup superblock at 819200, Group descriptors at 819201-819201
  Backup superblock at 884736, Group descriptors at 884737-884737
  Backup superblock at 1605632, Group descriptors at 1605633-1605633

# note that you may have a different output
# the number after Backup superblock at is the superblock you want

# run fsck.ext3 or fsck.ext2 ... or any other command for your filesystem (reiser etc)
# first umount the partition

root# umount /opt
# then fsck  
root# fsck.ext3  -b 32768 /dev/hda10

# after you are done mount back the partition
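
# if the filesystem is too damaged for dumpe2fs to read, mke2fs -n prints the
# same backup superblock locations without actually creating a filesystem
# (the -n is what makes it a dry run - double check you typed it!)
root# mke2fs -n /dev/hda10

# and once fsck is happy, mount it back
root# mount /opt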



Wednesday, April 20, 2011

Transfer files between two host with nc

Problem: you have two hosts that you can access but there is no mechanism
to transfer files between them - no ssh(scp/sftp), no ftp etc.
How to do it ?!

Solution: use nc and tar/dd/echo ...

# transfer by tar of a directory
|destination_host|                     |source_host|
nc -l 9000 | tar xvf -                  tar cvf - /my_dir  | nc destination_host 9000

# we listen on all interfaces          # we tar my_dir to STDOUT(-) and all is piped to 
# on port 9000, all that comes in      # nc that will connect on destination_host on port 9000
# is piped to tar xvf (will extract)   # and will transfer what ever is given
# - means take the STDIN

# transfer by dd of a partition /dev/sda3
|destination_host|                     |source_host|
nc -l 9000 | dd of=/backup_device      dd if=/dev/sda3  | nc destination_host 9000

# we listen on all interfaces          # we dd in /dev/sda3 (reading) all is piped to 
# on port 9000, all that comes in      # nc that will connect on destination_host on port 9000
# is piped to dd and dd will write it  # and will transfer what ever is given
# all to /backup_device

As you can see this becomes very useful because you can open the destination port as you need it
and even transfer from a block device, as with /dev/sda3 in the example. Once the transfer is done,
nc on the destination host stops listening on the port you asked for (some nc builds have a -k switch
to keep it listening, but support for it varies between implementations).
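
Over a slow link the same trick can be combined with on-the-fly compression, for example (a sketch, same hosts and port as above):

|destination_host|                       |source_host|
nc -l 9000 | gzip -d | tar xvf -         tar cvf - /my_dir | gzip | nc destination_host 9000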

Wednesday, February 23, 2011

sqlalchemy UUID as primary key

I keep hearing about having UUIDs as primary keys in your database, so I decided to give it a try with sqlalchemy and python (of course).

So the plan is to have a table users that has the following fields

id - UUID, stored as CHAR(32)
fname - varchar(50)
lname - varchar(50)

the code below builds the table for me.

1 from sqlalchemy import Table, Column, Integer, String, Sequence, MetaData, ForeignKey, CHAR
  2 from sqlalchemy.orm import mapper, sessionmaker, scoped_session
  3 from sqlalchemy import create_engine
  4 import uuid
  5 
  6 metadata = MetaData()
  7 
  8 users = Table('users', metadata,
  9               Column('id', CHAR(32), primary_key=True, autoincrement=True),
 10               Column('fname', String(50)),
 11               Column('lname', String(50))
 12              )
 13 # orm
 14 class Users(object):
 15     def __init__(self, fname, lname):
 16         assert isinstance(fname, str), 'fname is not a string'
 17         assert isinstance(lname, str), 'lname is not a string'
 18         self.fname = fname
 19         self.lname = lname
 20 
 21     
 22     
 23 mapper(Users, users, version_id_col=users.c.id, version_id_generator = lambda version:uuid.uuid4().hex)



the lines of interest are:
4 - import the uuid module (standard with python 2.6 or higher)
23 - the version_id_col=users.c.id and version_id_generator=lambda version: uuid.uuid4().hex
the explanation for it is at http://www.sqlalchemy.org/docs/orm/mapper_config.html?highlight=uuid#sqlalchemy.orm.mapper
basically you take the version id column that sqlalchemy would normally manage as an integer and have it generate a 32-character hex UUID instead.

the rest of the program


24 
 25 engine = create_engine('sqlite:///:memory:', echo=True)
 26 Session = scoped_session(sessionmaker(bind=engine))
 27 metadata.drop_all(engine)
 28 metadata.create_all(engine)
 29 # my test 
 30 session = Session()
 31 
 32 
 33 u = Users('s', 'd');
 34 u1 = Users('s1', 'd2');
 35 
 36 session.add_all([u1, u])
 37 session.commit()
 38 
 39 session.query(Users).all()

Are UUIDs better than integers as primary keys? I don't think so - at least with mysql, taking
into consideration the following article: http://www.mysqlperformanceblog.com/2007/03/13/to-uuid-or-not-to-uuid/
- but they 'hide' your data from the outside and seem to do the job.

Wednesday, February 16, 2011

MySQLdb (mysql-python) install on OSX 10.6 Snow Leopard (32 bits)

Ok - you have the mysql server installed in /usr/local/mysql and you are thinking - yes, I can connect to it from python just like on my linux box ... but on OSX 10.6 it is a bit different.
First, a bit of light on what is happening:

  • the python you run from /usr/bin/python is compiled for both 64 and 32 bits ! that is a so-called fat binary. do a file /usr/bin/python and you will see something like
    /usr/bin/python: Mach-O universal binary with 3 architectures
    /usr/bin/python (for architecture x86_64): Mach-O 64-bit executable x86_64
    /usr/bin/python (for architecture i386): Mach-O executable i386
    /usr/bin/python (for architecture ppc7400): Mach-O executable ppc
    

  • the mysql server that you installed is 32 bits only !

  • the code for MySQLdb can be compiled for either architecture, but not both at once as in the fat binary above


Steps to install

  • have the mysql server installed - source, archive or dmg - the best location to install is /usr/local/mysql
  • if you use a virtual environment it is best to extract the 32-bit version from the fat python into your environment. the same goes for 64 bits if you use it.
    to extract, do something like this after you have your virtual environment -
    cp /my_virtual/env/bin/python /my_virtual/env/bin/python.fat
    lipo -remove x86_64 /my_virtual/env/bin/python.fat -output /my_virtual/env/bin/python
    
    -- to check if you are using 32 bits
    python
    >>> import sys
    >>> sys.maxint
    2147483647
    
  • install mysql-python with pip/easy_install or from source

errors you may see and how to solve them
  • >>> import MySQLdb
    Traceback (most recent call last):
      File "", line 1, in 
      File "/Users/silviud/PROGS/PYTHON/Environments/2.6/lib/python2.6/site-packages/MySQLdb/__init__.py", line 19, in 
        import _mysql
    ImportError: dlopen(/Users/silviud/PROGS/PYTHON/Environments/2.6/lib/python2.6/site-packages/_mysql.so, 2): Library not loaded: libmysqlclient.16.dylib
      Referenced from: /Users/silviud/PROGS/PYTHON/Environments/2.6/lib/python2.6/site-packages/_mysql.so
      Reason: image not found
    
    This is because the dynamic loader can not find the library libmysqlclient.16.dylib, which is located in /usr/local/mysql/lib - to solve it, add this to your .profile file

    export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:/usr/local/mysql/lib
    


This is what I have done to make it work !
I've seen other solutions where you would have to choose the python architecture with an environment variable, as in

export VERSIONER_PYTHON_PREFER_32_BIT=yes
or
to have it system wide with
defaults write com.apple.versioner.python Prefer-32-Bit -bool yes
but NONE of them worked for me except what I showed above.
I even tried to link the mysql library statically into mysql-python by changing the site.cfg from the dist, but no luck.

In any case, I don't suggest you do this for a system-wide installation - use a virtual environment!
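
If you are not sure what got built, the file and otool commands will show you the architecture of the compiled module and which mysql library it expects (the path below is the one from the traceback above - yours will differ):

file /Users/silviud/PROGS/PYTHON/Environments/2.6/lib/python2.6/site-packages/_mysql.so
otool -L /Users/silviud/PROGS/PYTHON/Environments/2.6/lib/python2.6/site-packages/_mysql.so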

Tuesday, January 18, 2011

Mysql regex query

Mysql supports regexes in its queries - this is how you would do it.

desc PENDING_ORDER

+-------------+--------------+------+-----+---------+-------+
| Field       | Type         | Null | Key | Default | Extra |
+-------------+--------------+------+-----+---------+-------+
| CustomerID  | varchar(255) | NO   | PRI | NULL    |       | 
| DateCreated | datetime     | NO   |     | NULL    |       | 
| XML_DATA    | text         | NO   |     | NULL    |       | 
| Status      | varchar(20)  | NO   |     | NULL    |       | 
+-------------+--------------+------+-----+---------+-------+

-- find all values that have a new line after the last digit 
 
select XML_DATA as xml from PENDING_ORDER  where XML_DATA REGEXP '[0-9]+\n';

-- Output

<property>
<name> PartnerCustomerID </name>
<value> 999999579
                        </value>
</property>


Monday, January 17, 2011

rotate nohup out file (nohup.log)

I got this answer by posting on serverfault.com, asking how to rotate the nohup log file.
Basically you connect the output of nohup to a pipe that is redirected to a file - then the file can be moved around very easily.

mknod /tmp/mypipe p
cat < /tmp/mypipe >/tmp/myreallog &   # the cat must run in the background
nohup myapplication >/tmp/mypipe &


To rotate the log:
mv /tmp/myreallog /tmp/rotatedlog
kill [pid of the cat process]
cat < /tmp/mypipe >/tmp/myreallog &