tag:blogger.com,1999:blog-52950346631762686072024-03-18T01:22:26.562-04:00Keep it Simplesilviu dicuhttp://www.blogger.com/profile/12270639167316589111noreply@blogger.comBlogger86125tag:blogger.com,1999:blog-5295034663176268607.post-64489551965079812672023-10-10T11:19:00.004-04:002023-10-10T11:20:53.009-04:00Remove Windows 10 drivers from command lineNormally you would use System Settings -> Apps & features ... but if that did not work for you, this post explains how to
uninstall a driver from the command line.
<br/>
First, if you tried to delete the driver files directly from a location such as <span>C:\windows\system32\driverstore\FileRepository\</span>, you may have
noticed that it is not possible, since that requires SYSTEM access.
<br/>
To find the driver you want to remove, first list the installed drivers.
<br/>
Open a Command Prompt or PowerShell as Administrator, then run:
<pre>
dism /online /get-drivers /format:table > c:\drivers.txt
</pre>
Open the file c:\drivers.txt and note the <b>Published Name</b>, as in this example:
<pre>
Version: 10.0.19041.844
Image Version: 10.0.19045.3448
Obtaining list of 3rd party drivers from the driver store...
Driver packages listing:
-------------- | ----------------------------- | ----- | -------------------- | ---------------------------- | ---------- | ----------------
Published Name | Original File Name | Inbox | Class Name | Provider Name | Date | Version
-------------- | ----------------------------- | ----- | -------------------- | ---------------------------- | ---------- | ----------------
oem77.inf | nxdrv.inf | No | Net | SonicWall | 10/18/2017 | 2.0.6.1
</pre>
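With many third-party drivers installed, scanning drivers.txt by eye gets tedious. A small Python sketch (an illustrative helper, not part of dism) that parses the table output and filters rows by provider:

```python
def parse_driver_table(text):
    """Parse `dism /online /get-drivers /format:table` output into dicts."""
    header, rows = None, []
    for line in text.splitlines():
        if "|" not in line or line.strip().startswith("-"):
            continue  # skip prose lines and the dashed separator rows
        fields = [f.strip() for f in line.split("|")]
        if header is None:
            header = fields  # first pipe-delimited row is the column header
        else:
            rows.append(dict(zip(header, fields)))
    return rows

sample = """\
-------------- | ------------------ | ----- | ---------- | ------------- | ---------- | -------
Published Name | Original File Name | Inbox | Class Name | Provider Name | Date       | Version
-------------- | ------------------ | ----- | ---------- | ------------- | ---------- | -------
oem77.inf      | nxdrv.inf          | No    | Net        | SonicWall     | 10/18/2017 | 2.0.6.1
"""
print([d["Published Name"] for d in parse_driver_table(sample)
       if d["Provider Name"] == "SonicWall"])
# prints ['oem77.inf']
```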
Now remove the driver using its Published Name from the list above.
<pre>
pnputil.exe /d oem77.inf
</pre>
That's it. <br/>
Another tool that can help is <a href="https://learn.microsoft.com/en-us/sysinternals/downloads/autoruns" target="_blank">Autoruns</a>, though I found that while it works most of the time,
in some cases it only cleans up the registry and cannot remove the driver <i>files</i>.silviudhttp://www.blogger.com/profile/02091543407707234135noreply@blogger.com0tag:blogger.com,1999:blog-5295034663176268607.post-7604738138781866602023-01-19T11:26:00.002-05:002023-01-19T11:26:19.170-05:00Tmux cheat sheet<h2>Attach and detach</h2>
<pre><code>$ tmux Start new tmux session
$ tmux attach Attach to tmux session running in the background
Ctrl+B d Detach from tmux session, leaving it running in the background
Ctrl+B & Exit and quit tmux
Ctrl+B ? List all key bindings (press Q to exit help screen)
</code></pre>
<h2>Window management</h2>
<pre><code>Ctrl+B C Create new window
Ctrl+B N Move to next window
Ctrl+B P Move to previous window
Ctrl+B L Move to last window
Ctrl+B 0-9 Move to window by index number
# if you are running more than one tmux session you can also switch between them:
Ctrl+B ) Move to next session
Ctrl+B ( Move to previous session
Ctrl+B Ctrl+Z Suspend session
</code></pre>
<h2>Split window into panes</h2>
<pre><code>Ctrl+B % Vertical split (panes side by side)
Ctrl+B " Horizontal split (one pane below the other)
Ctrl+B CTRL+O Interchange pane position
Ctrl+B O Move to other pane
Ctrl+B ! Break the current pane out into its own window
Ctrl+B Q Display window index numbers
Ctrl+B Ctrl-Up/Down Resize current pane (due north/south)
Ctrl+B Ctrl-Left/Right Resize current pane (due west/east)
</code></pre>
<h2>Pane related</h2>
<pre><code>
join-pane -s 1 -t 0 -p 20 "Join pane source 1 into pane target 0 with 20% usage"
break-pane "Break the current pane out into its own window (same as CTRL+B !)"
# best to create some key bindings into tmux.conf
# pane movement vertical split
bind-key j command-prompt -p "join pane from:" "join-pane -h -s '%%'"
bind-key s command-prompt -p "send pane to:" "join-pane -h -t '%%'"
# pane movement
bind-key J command-prompt -p "join pane from:" "join-pane -s '%%'"
bind-key S command-prompt -p "send pane to:" "join-pane -t '%%'"
# cycle through preset pane layouts (rearranges the panes)
Ctrl+B <space>
</code></pre>
<h2>Copy/Paste</h2>
<pre><code>CTRL+B [ enter copy mode (use arrows to move, CTRL+F/CTRL+B to page forward/back)
SHIFT+v to start select
Movement keys to select
ENTER to copy
CTRL+B ] to paste
q to exit from copy mode
</code></pre>
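If SHIFT+v does nothing in copy mode, tmux is probably using emacs keys. A minimal <code>~/.tmux.conf</code> sketch (tmux 2.4+ syntax, where the copy-mode-vi key table was introduced) to enable vi-style selection:

```shell
# ~/.tmux.conf - use vi keys in copy mode
setw -g mode-keys vi
# 'v' begins the selection, 'y' copies it and leaves copy mode
bind-key -T copy-mode-vi v send-keys -X begin-selection
bind-key -T copy-mode-vi y send-keys -X copy-selection-and-cancel
```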
<h2>Misc</h2>
<pre><code>CTRL+B ? List all bindings
For more details - https://github.com/tmux/tmux/blob/master/key-bindings.c#L345
</code></pre>
<p>For the full list of commands, see the files beginning with <code>cmd-</code> in <code>https://github.com/tmux/tmux/blob/master</code>.</p>silviudhttp://www.blogger.com/profile/02091543407707234135noreply@blogger.com0tag:blogger.com,1999:blog-5295034663176268607.post-74013971622602295682022-10-06T14:13:00.006-04:002023-01-19T11:19:51.981-05:00Online openssl private certificate and key with alternative DNSOpenSSL added a convenient alternative to using a config file or extension section when creating requests with alternative DNS names.
This will create a key and a self-signed certificate (not a certificate request) with two additional DNS names, alt1.example.net and alt2.example.net:
<pre class="prettyprint">
sudo openssl req -x509 -nodes -days 3650 -newkey rsa:4096 -keyout mykey.key -out mycer.crt -subj '/CN=main.example.net' -addext 'subjectAltName=DNS:alt1.example.net,DNS:alt2.example.net'
</pre>silviudhttp://www.blogger.com/profile/02091543407707234135noreply@blogger.com0tag:blogger.com,1999:blog-5295034663176268607.post-79496882444982761442021-12-29T11:22:00.003-05:002022-10-06T13:34:48.105-04:00Victoria metrics on Aws EC2 instance<p>We will configure a <em>single</em> EC2 instance as a Victoria Metrics server to be used as Prometheus remote storage.</p>
<p>Access to VM (Victoria Metrics) is done via port <code>8247</code> and is protected by HTTP basic auth. All traffic is
encrypted with a self-signed certificate.</p>
<h2>Installation</h2>
<p>We will install manually by downloading the releases from GitHub and configuring the local system.</p>
<h4>Download binaries</h4>
<div><pre class="prettyprint"># create a group and user for vm
$ sudo groupadd -r victoriametrics
$ sudo useradd -g victoriametrics victoriametrics
# download
$ curl -L https://github.com/VictoriaMetrics/VictoriaMetrics/releases/download/v1.70.0/victoria-metrics-amd64-v1.70.0.tar.gz --output victoria-metrics-amd64-v1.70.0.tar.gz
# unpack and install it
$ sudo tar xvf victoria-metrics-amd64-v1.70.0.tar.gz -C /usr/local/bin/
$ sudo chown root:root /usr/local/bin/victoria-metrics-prod
# create data directory
$ sudo mkdir /var/lib/victoria-metrics-data
$ sudo chown -v victoriametrics:victoriametrics /var/lib/victoria-metrics-data
</pre></div>
<h4>Configure the service</h4>
<div><pre class="prettyprint">cat >> /etc/systemd/system/victoriametrics.service <<EOF
[Unit]
Description=High-performance, cost-effective and scalable time series database, long-term remote storage for Prometheus
After=network.target
[Service]
Type=simple
User=victoriametrics
Group=victoriametrics
StartLimitBurst=5
StartLimitInterval=0
Restart=on-failure
RestartSec=1
ExecStart=/usr/local/bin/victoria-metrics-prod \
-storageDataPath=/var/lib/victoria-metrics-data \
-httpListenAddr=127.0.0.1:8428 \
-retentionPeriod=1
ExecStop=/bin/kill -s SIGTERM $MAINPID
LimitNOFILE=65536
LimitNPROC=32000
[Install]
WantedBy=multi-user.target
EOF
</pre></div>
<p>At this point you can start the service with <code>systemctl enable victoriametrics.service --now</code>; however, port 8428 is neither
authenticated nor encrypted, so we will add basic authentication and TLS encryption with a self-signed certificate
(any valid certificate will work). Note that the service listens only on localhost.</p>
<h5>Vmauth</h5>
<p>To protect the service we will use <code>vmauth</code>, which is part of the vmutils tool set released by VictoriaMetrics.</p>
<div><pre class="prettyprint"># download and install the vm utils
$ curl -L https://github.com/VictoriaMetrics/VictoriaMetrics/releases/download/v1.70.0/vmutils-amd64-v1.70.0.tar.gz --output vmutils-amd64-v1.70.0.tar.gz
$ sudo tar xvf vmutils-amd64-v1.70.0.tar.gz -C /usr/local/bin/
$ sudo chown -v root:root /usr/local/bin/vm*-prod
</pre></div>
<h6>Configure vmauth</h6>
<p>Create a config file (<code>config.yml</code>) to enable basic authentication. </p>
<p>The format of the file is simple, you need a username and a password.</p>
<div><pre class="prettyprint">$ sudo mkdir -p /etc/victoriametrics/ssl/
$ sudo chown -vR victoriametrics:victoriametrics /etc/victoriametrics
$ sudo touch /etc/victoriametrics/config.yml
$ sudo chown -v victoriametrics:victoriametrics /etc/victoriametrics/config.yml
# generate a password for our user
$ python3 -c 'import secrets; print(secrets.token_urlsafe())'
KGKK_NoiciEMn6KdBk6CkcLHZt6TpB-Cgt12UFqnutU
# write the config
$ sudo cat >> /etc/victoriametrics/config.yml <<EOF
> users:
> - username: "user1"
> password: "KGKK_NoiciEMn6KdBk6CkcLHZt6TpB-Cgt12UFqnutU"
> url_prefix: "http://127.0.0.1:8428"
> # end config
> EOF
</pre></div>
<h6>Install a self-signed certificate</h6>
<div><pre class="prettyprint">$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:4096 -keyout /etc/victoriametrics/ssl/victoriametrics.key -out /etc/victoriametrics/ssl/victoriametrics.crt
$ sudo chown -Rv victoriametrics:victoriametrics /etc/victoriametrics/ssl/
</pre></div>
<h6>Enable vmauth service</h6>
<div><pre class="prettyprint">cat >> /etc/systemd/system/vmauth.service <<EOF
[Unit]
Description=Simple auth proxy, router and load balancer for VictoriaMetrics
After=network.target
[Service]
Type=simple
User=victoriametrics
Group=victoriametrics
StartLimitBurst=5
StartLimitInterval=0
Restart=on-failure
RestartSec=1
ExecStart=/usr/local/bin/vmauth-prod \
--tls=true \
--auth.config=/etc/victoriametrics/config.yml \
--httpListenAddr=0.0.0.0:8247 \
--tlsCertFile=/etc/victoriametrics/ssl/victoriametrics.crt \
--tlsKeyFile=/etc/victoriametrics/ssl/victoriametrics.key
ExecStop=/bin/kill -s SIGTERM $MAINPID
LimitNOFILE=65536
LimitNPROC=32000
[Install]
WantedBy=multi-user.target
EOF
</pre></div>
<p>Start and enable it with <code>systemctl enable vmauth.service --now</code>.</p>
<p>To test, first construct a base64 string from the username and password you wrote into the <code>config.yml</code> file.</p>
<p>For example, with user <code>vmuser</code> and password <code>secret</code>:</p>
<div><pre class="prettyprint">$ echo -n 'vmuser:secret' | base64
dm11c2VyOnNlY3JldA==
# to test vmauth
$ curl -H 'Authorization: Basic dm11c2VyOnNlY3JldA==' --insecure https://localhost:8247/api/v1/query -d 'query={job=~".*"}'
</pre></div>
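The same <code>Authorization</code> header can be built programmatically when scripting against vmauth. A short Python sketch, using the example credentials above:

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build an HTTP Basic Authorization header value."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("vmuser", "secret"))
# prints Basic dm11c2VyOnNlY3JldA==
```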
<h2>Operations</h2>
<h3>Snapshots</h3>
<p>List what’s available</p>
<div><pre class="prettyprint">curl --insecure -H 'Authorization: Basic dm11c2VyOnNlY3JldA==' 'https://localhost:8247/snapshot/list'
{"status":"ok","snapshots":["20211227145126-16C1DDB61673BA11"
</pre></div>
<p>Create a new snapshot</p>
<div><pre class="prettyprint">curl --insecure -H 'Authorization: Basic dm11c2VyOnNlY3JldA==' 'https://localhost:8247/snapshot/create'
{"status":"ok","snapshot":"20211227145526-16C1DDB61673BA12"}
</pre></div>
<p>List again the snapshots</p>
<div><pre class="prettyprint">curl -s --insecure -H 'Authorization: Basic dm11c2VyOnNlY3JldA==' 'https://localhost:8247/snapshot/list' | jq .
{
"status": "ok",
"snapshots": [
"20211227145126-16C1DDB61673BA11",
"20211227145526-16C1DDB61673BA12"
]
}
</pre></div>
<h3>Backups</h3>
<p>The snapshots are stored on local disk under the data path (parameter <code>-storageDataPath=</code>); on my instance
that resolves to <code>/var/lib/victoria-metrics-data/</code>.</p>
<p>The data in snapshots is compressed with <a href="https://facebook.github.io/zstd/">Zstandard</a>.</p>
<p>To push the backups to s3 you can use <code>vmbackup</code>.</p>
<div><pre class="prettyprint">$ sudo vmbackup-prod -storageDataPath=/var/lib/victoria-metrics-data -snapshotName=20211227145526-16C1DDB61673BA12 -dst=s3://BUCKET-NAME/`date +%s`
...
2021-12-29T16:07:20.571Z info VictoriaMetrics/app/vmbackup/main.go:105 gracefully shutting down http server for metrics at ":8420"
2021-12-29T16:07:20.572Z info VictoriaMetrics/app/vmbackup/main.go:109 successfully shut down http server for metrics in 0.001 seconds
</pre></div>
<p>For more info you can see <a href="https://docs.victoriametrics.com/vmbackup.html">vmbackup</a>.</p>silviudhttp://www.blogger.com/profile/02091543407707234135noreply@blogger.com0tag:blogger.com,1999:blog-5295034663176268607.post-84216014818898683942021-12-24T13:21:00.002-05:002021-12-24T13:21:32.095-05:00Postgresql locks
<h1>Locks in postgres</h1>
<h2>Find locks</h2>
<div><pre class="prettyprint">select pid, state, usename, query, query_start
from pg_stat_activity
where pid in (
select pid from pg_locks l
join pg_class t on l.relation = t.oid
and t.relkind = 'r'
where t.relname = 'search_hit'
);
</pre></div>
<h2>Killing locks</h2>
<div><pre class="prettyprint">SELECT pg_cancel_backend(PID); -- cancels the query only; pg_terminate_backend(PID) ends the whole connection
</pre></div>silviudhttp://www.blogger.com/profile/02091543407707234135noreply@blogger.com0tag:blogger.com,1999:blog-5295034663176268607.post-72417228095637542562021-12-24T10:41:00.000-05:002021-12-24T10:41:09.718-05:00Haproxy socket stats <h2>Enable stats<a href="#enable-stats" title="Permanent link"></a></h2>
<p>Reporting is provided if you add <code>stats enable</code> to its config.</p>
<p>The setting is described at <a href="https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-stats%20enable">https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-stats%20enable</a></p>
<p>In this post I describe how to use the <code>socket</code> type.</p>
<h3>Enable the stats socket<a href="#enable-the-stats-socket" title="Permanent link"></a></h3>
<p>I enable it in the <code>global</code> section like so:</p>
<div><pre class="prettyprint">global
stats socket /var/lib/haproxy/stats group haproxy mode 664
</pre></div>
<p>What this does is:</p>
<ul>
<li>enable the stats socket under <code>/var/lib/haproxy/stats</code></li>
<li>the group owner is haproxy (running haproxy as user haproxy)</li>
<li>permissions are rw (user), rw(group), r(others)</li>
</ul>
<p><strong>Note</strong>: there is an <code>admin</code> option that allows controlling haproxy through the socket, but I don’t use it.</p>
<h3>Reading stats from socket (netcat)<a href="#reading-stats-from-socket-netcat" title="Permanent link"></a></h3>
<p>You need to have installed <code>netcat</code> (nc).</p>
<div><pre class="prettyprint"><span>$ </span><span>echo</span> <span>'show stat'</span> <span>|</span> nc -U /var/lib/haproxy/stats
<span># pxname,svname,qcur,qmax,scur,smax,slim,</span>
....
http_frontend,
....
</pre></div>
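Since <code>show stat</code> emits CSV with a commented header line, it is easy to post-process. A Python sketch (a hypothetical helper) that turns the output of the nc command above into dicts:

```python
import csv
import io

def parse_show_stat(raw: str):
    """Parse haproxy `show stat` CSV output into a list of dicts."""
    raw = raw.lstrip()
    if raw.startswith("# "):
        raw = raw[2:]  # the header line starts with '# pxname,svname,...'
    reader = csv.DictReader(io.StringIO(raw))
    # each row ends with a trailing comma, producing an empty column name
    return [{k: v for k, v in row.items() if k} for row in reader]

sample = "# pxname,svname,qcur,\nhttp_frontend,FRONTEND,0,\n"
print(parse_show_stat(sample))
# prints [{'pxname': 'http_frontend', 'svname': 'FRONTEND', 'qcur': '0'}]
```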
<h3>Reading stats from socket (socat)<a href="#reading-stats-from-socket-socat" title="Permanent link"></a></h3>
<p>You need to install <code>socat</code>, since it is usually not installed by default.</p>
<p>To use it</p>
<div><pre class="prettyprint"><span>$ </span><span>echo</span> <span>'show stat'</span> <span>|</span> socat stdio /var/lib/haproxy/stats
<span># pxname,svname,qcur,qmax,scur,smax,slim,</span>
....
http_frontend,
....
</pre></div>silviudhttp://www.blogger.com/profile/02091543407707234135noreply@blogger.com0tag:blogger.com,1999:blog-5295034663176268607.post-65907098726789913812020-12-18T12:10:00.000-05:002020-12-18T12:10:33.219-05:00AWS cli filter for security groups<p>There are times when I want to see the security groups in an AWS region. Nothing special really, you can always use the aws cli :)</p>
<p>But wait ... there is so much output, especially if you have many groups and many rules.</p>
<p>So this is a simple way to filter on the following values (you can add more, but these are the ones I mostly use):
<ul>
<li>VPC Id</li>
<li>Group Name</li>
<li>Group Id</li>
</ul>
</p>
<p>
Tools that I use
<ul>
<li>aws cli (you need to install it)</li>
<li> jq (available on many linux distros)</li>
<li>awk (comes with any linux distro)</li>
</ul>
</p>
<p>This is how you put all together</p>
<p>
<pre class="prettyprint">
$ export GROUP='My SG'
$ aws ec2 describe-security-groups --filters Name=group-name,Values="$GROUP" --output json| jq '.SecurityGroups[]| .VpcId, .GroupName, .GroupId'| awk '{printf (NR%3==0) ? $0 "\n" : $0}'| sed -e 's/""/ - /g'
# this will print
"vpc-xxxxxx - My SG - sg-yyyy"
# bonus - you can use a regex for GROUP
$ export GROUP='My*Prod'
$ aws ec2 describe-security-groups --filters Name=group-name,Values="$GROUP" --output json| jq '.SecurityGroups[]| .VpcId, .GroupName, .GroupId'| awk '{printf (NR%3==0) ? $0 "\n" : $0}'| sed -e 's/""/ - /g'
# this will print
"vpc-xxxxxx - My Prod - sg-yyyy"
"vpc-xxxxxx - My deprecated Prod - sg-yyyy"
"vpc-xxxxxx - My whatever Prod - sg-yyyy"
</pre>
</p>
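If you prefer Python to the jq/awk/sed pipeline, the same formatting can be done over the JSON that <code>aws ec2 describe-security-groups --output json</code> emits. The keys below are the real API field names; the sample data is made up:

```python
import json

def format_groups(payload: dict) -> list:
    """Render each security group as 'VpcId - GroupName - GroupId'."""
    return [f"{g['VpcId']} - {g['GroupName']} - {g['GroupId']}"
            for g in payload["SecurityGroups"]]

sample = json.loads("""
{"SecurityGroups": [
  {"VpcId": "vpc-xxxxxx", "GroupName": "My Prod", "GroupId": "sg-yyyy"}
]}
""")
print(format_groups(sample))
# prints ['vpc-xxxxxx - My Prod - sg-yyyy']
```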
silviudhttp://www.blogger.com/profile/02091543407707234135noreply@blogger.com0tag:blogger.com,1999:blog-5295034663176268607.post-68790773571478791232019-12-20T16:20:00.002-05:002019-12-20T16:20:54.554-05:00Tcpdump on docker interfaces<p> This post shows how you can inspect the traffic of Docker containers with tcpdump on Linux.
</p>
<p>
First, find the container names and MAC addresses.
<pre class="prettyprint prettyprinted">
bash $ for c in `sudo docker ps| grep -v CON| awk '{print $1}'`; do sudo docker inspect $c| jq ". |map({ (.Name): .NetworkSettings.Networks[].MacAddress })"; done
[
{
"/docker-demo_cortex2_1": "02:42:ac:12:00:08"
}
]
[
{
"/docker-demo_consul_1": "02:42:ac:12:00:05"
}
]
[
{
"/docker-demo_prometheus2_1": "02:42:ac:12:00:03"
}
]
[
{
"/docker-demo_cortex3_1": "02:42:ac:12:00:09"
}
]
[
{
"/docker-demo_cortex1_1": "02:42:ac:12:00:06"
}
]
[
{
"/docker-demo_prometheus3_1": "02:42:ac:12:00:04"
}
]
[
{
"/docker-demo_prometheus1_1": "02:42:ac:12:00:02"
}
]
[
{
"/docker-demo_grafana_1": "02:42:ac:12:00:07"
}
]
</pre>
</p>
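The loop above runs jq once per container; the same name-to-MAC mapping can be sketched in Python over <code>docker inspect</code> JSON (the sample below is trimmed to only the fields used):

```python
import json

def name_to_mac(inspect_json: str) -> dict:
    """Map container name -> MAC address from `docker inspect` output."""
    result = {}
    for container in json.loads(inspect_json):
        for net in container["NetworkSettings"]["Networks"].values():
            result[container["Name"]] = net["MacAddress"]
    return result

sample = json.dumps([{
    "Name": "/docker-demo_cortex1_1",
    "NetworkSettings": {
        "Networks": {"cortex_network": {"MacAddress": "02:42:ac:12:00:06"}}
    },
}])
print(name_to_mac(sample))
# prints {'/docker-demo_cortex1_1': '02:42:ac:12:00:06'}
```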
<p>
I want to inspect /docker-demo_cortex1_1, so I list the forwarding table (fdb):
<pre class="prettyprint prettyprinted">
bash $ /sbin/bridge fdb |grep 02:42:ac:12:00:06
02:42:ac:12:00:06 dev vethee0ca4e master br-f9c7e5b79104
</pre>
This says that the dev `vethee0ca4e` forwards to the master bridge `br-f9c7e5b79104`
</p>
<p>
List the interfaces on the system:
<pre class="prettyprint prettyprinted">
bash$ /sbin/ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:6f:ce:6d brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:cf:95:1a:17 brd ff:ff:ff:ff:ff:ff
4: br-f9c7e5b79104: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:ae:f7:a0:c6 brd ff:ff:ff:ff:ff:ff
28: veth47b30a5@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f9c7e5b79104 state UP mode DEFAULT group default
link/ether b2:23:05:8a:cd:4e brd ff:ff:ff:ff:ff:ff link-netnsid 0
30: veth95ec404@if29: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f9c7e5b79104 state UP mode DEFAULT group default
link/ether ba:41:85:94:67:39 brd ff:ff:ff:ff:ff:ff link-netnsid 1
32: veth246e156@if31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f9c7e5b79104 state UP mode DEFAULT group default
link/ether 92:26:8e:09:97:af brd ff:ff:ff:ff:ff:ff link-netnsid 2
34: veth426ba55@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f9c7e5b79104 state UP mode DEFAULT group default
link/ether 6a:c0:12:86:30:0f brd ff:ff:ff:ff:ff:ff link-netnsid 5
38: veth91e2bee@if37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f9c7e5b79104 state UP mode DEFAULT group default
link/ether de:53:75:37:b0:88 brd ff:ff:ff:ff:ff:ff link-netnsid 6
40: veth9199c33@if39: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f9c7e5b79104 state UP mode DEFAULT group default
link/ether e2:d1:fa:61:83:cd brd ff:ff:ff:ff:ff:ff link-netnsid 3
42: vethdb6a7ca@if41: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f9c7e5b79104 state UP mode DEFAULT group default
link/ether ea:51:60:cc:6f:e8 brd ff:ff:ff:ff:ff:ff link-netnsid 4
44: vethee0ca4e@if43: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-f9c7e5b79104 state UP mode DEFAULT group default
link/ether ca:b1:72:d1:c7:e2 brd ff:ff:ff:ff:ff:ff link-netnsid 7
</pre>
As you can see the interface that I want to inspect is listed as 44.
</p>
<p>
At this point just start a tcpdump on the interface
<pre class="prettyprint prettyprinted">
bash$ sudo tcpdump -nvv -s0 -A -i vethee0ca4e
</pre>
In case you have multiple bridges configured on the system, it helps to first find the bridge your container is attached to.
<pre class="prettyprint prettyprinted">
bash$ sudo docker network ls
NETWORK ID NAME DRIVER SCOPE
bedcfa44fe2b bridge bridge local
f9c7e5b79104 docker-demo_cortex_network bridge local
0d3a96789a7f host host local
1ecffcd51252 none null local
</pre>
</p>silviudhttp://www.blogger.com/profile/02091543407707234135noreply@blogger.com0tag:blogger.com,1999:blog-5295034663176268607.post-24663517819382105452019-04-14T07:49:00.003-04:002019-04-14T07:55:41.727-04:00Making use of Ansible vault from fabric(fabfile)Ansible provides a convenient solution to encrypt sensitive data such as passwords, secrets, etc. - <a href="https://docs.ansible.com/ansible/latest/user_guide/vault.html">Ansible Vault</a>.
This post shows how to use the ansible vault from <a href="http://www.fabfile.org/">Fabric</a>.
First you might ask why? At first I thought it was a crazy idea :) however, since I've been using Fabric and Ansible for a long while,
I figured why not - they are both written in Python, right?!
To use it, you obviously need both Fabric and Ansible installed.
Create a <b>fabfile</b> and at the top import a few Ansible modules:
<pre class="prettyprint prettyprinted" style>
from ansible.cli import CLI
from ansible.parsing.vault import VaultLib
from ansible.parsing.dataloader import DataLoader
import yaml
import os
</pre>
This lets you interface with <b>VaultLib</b>, which in turn decrypts the vault.
And this is how you use them from a function:
<pre class="prettyprint prettyprinted" style>
def get_vault_data(vault_pass_file, vault_file):
    secrets = CLI.setup_vault_secrets(
        DataLoader(),
        vault_ids=[],
        vault_password_files=[vault_pass_file])
    v = VaultLib(secrets=secrets)
    data = v.decrypt(open(vault_file, 'rb').read())
    return yaml.safe_load(data)
# in case you keep the password file into your home directory - adjust as required
HOME = os.environ.get("HOME")
VAULT_PASSWORD_FILE = os.path.join(HOME, ".ansible/vault_password_file")
my_vault = get_vault_data(VAULT_PASSWORD_FILE, "/etc/ansible/vault.yml")
print(my_vault) # this is the data from the encrypted Ansible vault.
</pre>
silviudhttp://www.blogger.com/profile/02091543407707234135noreply@blogger.com0tag:blogger.com,1999:blog-5295034663176268607.post-5312598267628643682018-05-14T09:31:00.001-04:002018-05-14T09:31:26.806-04:00Python pip install from git with specific revision <p>
There are times when you want to install a package at a specific git revision (a tag, branch, or commit).
</p>
<p>
The general syntax is
<pre>
pip install -e git+https://github.com/{ username }/{ reponame }.git@{ tag name }#egg={ desired egg name }
</pre>
And this is how to install tag <i>3.7.0b0</i> from github via <i>https</i>
<pre>
# install
pip install git+https://github.com/mongodb/mongo-python-driver.git@3.7.0b0#egg=pymongo
# use pymongo
import pymongo
pymongo.MongoClient()
# MongoClient(host=['localhost:27017'], document_class=dict, tz_aware=False, connect=True)
</pre>
silviudhttp://www.blogger.com/profile/02091543407707234135noreply@blogger.com0tag:blogger.com,1999:blog-5295034663176268607.post-23976274933782455232017-11-28T11:54:00.000-05:002017-11-28T11:54:06.736-05:00CentOS 7 Postfix relay (gmail)<h3> How to send emails through a smart relay that uses SASL and TLS </h3>
<p> I used:
<ul>
<li> CentOS Linux release 7.3.1611 </li>
<li>postfix-2.10.1-6.el7.x86_64</li>
</ul>
The rpm comes from CentOS yum Base.
</p>
<p>
<h4> The setup </h4>
<h5> File: /etc/postfix/main.cf </h5>
This is the main configuration for postfix with regards to how you would like
it to behave.
<pre>
smtpd_banner = $myhostname ESMTP $mail_name
biff = no
append_dot_mydomain = no
readme_directory = no
smtpd_tls_session_cache_timeout=3600s
tls_random_source=dev:/dev/urandom
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_tls_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl/password
smtp_use_tls = yes
smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.trust.crt
smtp_tls_loglevel = 1
smtp_tls_security_level = encrypt
smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = ${OPTIONAL_HOSTNAME}
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = $myhostname localhost.$mydomain
relayhost = [${mail.RELAY}]:587
mynetworks = 127.0.0.0/8
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = localhost
inet_protocols = ipv4
# comment these two when done
debug_peer_list = ${mail.RELAY}
debug_peer_level = 3
</pre>
<h5> File: /etc/postfix/sasl/password </h5>
Write into the file the username and password that you use to authenticate.
<pre>
[${mail.RELAY}] ${user@domain}:${PASSWORD}
</pre>
Once you save the file you need to create the lookup database; in this case it's a hash:
<pre>
cd /etc/postfix/sasl && postmap password
</pre>
At this point restart postfix
<pre>
systemctl restart postfix
</pre>
</p>
<p>
<h4>The problem </h4>
Since everything is configured ... you would expect that now you can send email, however ...
<pre>
smtp_sasl_authenticate: mail.RELAY[IPV4]:587: SASL mechanisms PLAIN LOGIN
warning: SASL authentication failure: No worthy mechs found
...
send attr reason = SASL authentication failed; cannot authenticate to server mail.RELAY[IPV4]: no mechanism available
</pre>
The puzzling part is that the username and password work fine ... you can verify by using telnet:
<pre>
# First compute the base64 encoded string. The \0 are NUL byte separators
printf '${user@domain}\0${user@domain}\0${PASSWORD}' | base64
# telnet to the smtp relay
telnet ${mail.RELAY}
EHLO ${OPTIONAL_HOSTNAME}
250-server.example.com
250-PIPELINING
250-SIZE 10240000
250-ETRN
250-AUTH DIGEST-MD5 PLAIN CRAM-MD5
250 8BITMIME
AUTH PLAIN ${COMPUTED_STRING_FROM_PRINTF}
235 Authentication successful
</pre>
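Getting the NUL bytes right in a shell is fiddly; the same AUTH PLAIN token can be computed with a small Python sketch (the credentials below are placeholders):

```python
import base64

def auth_plain_token(user: str, password: str) -> str:
    """Base64 token for SASL AUTH PLAIN: authzid NUL authcid NUL password."""
    return base64.b64encode(f"{user}\0{user}\0{password}".encode()).decode()

print(auth_plain_token("user@example.com", "PASSWORD"))
```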
So what is not working?!
Based on the errors we've seen, postfix complains that there are no <b>worthy mechs</b> ... which may lead you to dig into the source code.
Bottom line: since Postfix uses the Cyrus SASL library, as per the <a href="https://github.com/robn/postfix/blob/master/html/SASL_README.html#L340">Postfix documentation</a>, you need to install the Cyrus SASL packages - in particular <b>cyrus-sasl-plain</b>, which provides the PLAIN mechanism
<pre>
yum install -y cyrus-sasl cyrus-sasl-lib cyrus-sasl-plain
# restart postfix
systemctl restart postfix
</pre>
At this point if you keep the debug on you will see
<pre>
....
smtp_sasl_authenticate: ${mail.RELAY}[${IPV4}]:587: SASL mechanisms PLAIN LOGIN
xsasl_cyrus_client_get_user: ${user@domain}
xsasl_cyrus_client_get_passwd: ${PASSWORD}
...
... 235 2.7.0 Authentication successful
</pre>
Note: all ${} symbols should be replaced with your relevant information. The value of myhostname in /etc/postfix/main.cf is optional; if not present postfix uses your hostname.
</p>silviudhttp://www.blogger.com/profile/02091543407707234135noreply@blogger.com0tag:blogger.com,1999:blog-5295034663176268607.post-77933086893004500552017-11-01T09:46:00.000-04:002017-11-01T09:46:38.116-04:00Zabbix server under Selinux (Centos 7) <h2> Zabbix server under Selinux (CentOS 7) </h2>
<p> When running zabbix server under Selinux, out of the box when you
start <br/> <code>systemctl start zabbix-server</code> <br/> you will get an error like
this in <code> /var/log/zabbix/zabbix_server.log </code> </p>
<pre>
using configuration file: /etc/zabbix/zabbix_server.conf
cannot set resource limit: [13] Permission denied
cannot disable core dump, exiting...
Starting Zabbix Server. Zabbix 3.0.12 (revision 73586).
</pre>
<p> The problem is related to zabbix policy under Selinux.</p>
<h4> How to Fix it </h4>
<p> First as the message says zabbix server needs to set some resource limits.
<br/>
To do so it needs permission from selinux. Run the following to capture the denial
and transform it into a policy module that selinux can load later.
<br/>
<code>
cat /var/log/audit/audit.log | grep zabbix_server | grep denied | audit2allow -M zabbix_server.limits
</code>
<p>Two files are created, a .pp and a .te. The .te file should have content similar to </p>
<pre>
module zabbix_server.limits 1.0;
require {
type zabbix_t;
class process setrlimit;
}
#============= zabbix_t ==============
allow zabbix_t self:process setrlimit;
</pre>
<p> Load this policy with <code>semodule -i zabbix_server.limits.pp </code> </p>
<p> At this point zabbix server can be started <code> systemctl start zabbix-server</code>
<br/>
If you need to connect to a database such as MySQL/PostgreSQL you will need to allow zabbix server again ... (note: I used mysql/mariadb)
</p>
<code>
cat /var/log/audit/audit.log | grep zabbix_server | grep denied | audit2allow -M zabbix_server.ports
</code>
<p>Again, this will create two files; the .te file should look like
<pre>
module zabbix_server_ports 1.0;
require {
type mysqld_port_t;
type zabbix_t;
class process setrlimit;
class tcp_socket name_connect;
}
#============= zabbix_t ==============
#!!!! This avc can be allowed using the boolean 'zabbix_can_network'
allow zabbix_t mysqld_port_t:tcp_socket name_connect;
#!!!! This avc is allowed in the current policy
allow zabbix_t self:process setrlimit;
</pre>
As you can see the <b>setrlimit</b> rule is already present and you will need to allow the socket access.
<br/>
To do so <code> semodule -i zabbix_server.ports.pp </code>
</p>
<p> At this point you have two policies loaded and you should restart zabbix server <code> systemctl restart zabbix-server </code>
<br/>
<b>Note:</b> This may apply to other Linux distros/versions that use Selinux, though I only tried it on CentOS 7.
</p>silviudhttp://www.blogger.com/profile/02091543407707234135noreply@blogger.com2tag:blogger.com,1999:blog-5295034663176268607.post-57248002755288931352017-02-10T09:19:00.000-05:002017-02-10T09:19:00.881-05:00MongoDB shell - query collections with special characters <p>
From time to time I run into MongoDB collections whose names contain characters that the mongo shell
interprets specially, so the name cannot be used as is.
</p>
<p>
For example, if your collection name is Items:SubItems and you try to query it as you normally would:
<pre class="prettyprint prettyprinted">
mongos> db.Items:SubItems.findOne()
2017-02-10T14:11:17.305+0000 E QUERY SyntaxError: Unexpected token :
</pre>
The 'fix' is to use JavaScript's square bracket notation - so this will work:
<pre class="prettyprint prettyprinted">
mongos> db['Items:SubItems'].stats()
{
...
}
</pre>
This is called 'square bracket notation' in JavaScript.
See <a href="https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Operators/Property_accessors">Property_accessors</a> for more info.
</p>
silviudhttp://www.blogger.com/profile/02091543407707234135noreply@blogger.com0tag:blogger.com,1999:blog-5295034663176268607.post-91366260044488449842016-12-06T18:45:00.002-05:002016-12-06T18:45:53.458-05:00Password recovery on Zabbix server UI<p> In case you need it ... </p>
<p> Obtain access to the database for read/write (for mysql this is what you need) </p>
<pre>
update zabbix.users set passwd=md5('mynewpassword') where alias='Admin';
</pre>silviudhttp://www.blogger.com/profile/02091543407707234135noreply@blogger.com0tag:blogger.com,1999:blog-5295034663176268607.post-44908294606054352152016-11-16T10:03:00.002-05:002016-11-16T10:04:41.723-05:00Netcat HTTP server<p> Netcat is a very versatile program for network communications - you can find it at <a href="http://nc110.sourceforge.net/">nc110.sourceforge.net</a>. </p>
<p> Often I need to test programs against a dummy HTTP server, and netcat makes this very easy. </p>
<p> Let's say you want to respond with HTTP code 200 ... here is how to do it with netcat in a shell
<pre class="prettyprint prettyprinted">
nc -k -lp 9000 -c 'echo "HTTP/1.1 200 OK\nContent-Length:0\nContent-Type: text/html; charset=utf-8"' -vvv -o session.txt
</pre>
To explain the switches used:
<ul>
<li> -k keep listening for more connections; by default netcat exits after the first one </li>
<li> -l listen on all interfaces (TCP) </li>
<li> -p the port number to bind to </li>
<li> -c 'echo "HTTP/1.1 200 OK\nContent-Length:0\nContent-Type: text/html; charset=utf-8"' is the most interesting one: it sends the client a minimal HTTP response header with status 200 OK </li>
<li> -vvv verbosity level </li>
<li> -o session.txt write all input and output to this file </li>
</ul>
Now you have a dummy HTTP server running on port 9000 that answers 200 OK ALL the time :)
</p>
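If netcat isn't available, the same always-200 dummy server takes only a few lines of Python's standard library. This is a sketch equivalent of the netcat one-liner above, not something the post itself uses:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class AlwaysOKHandler(BaseHTTPRequestHandler):
    """Answer 200 OK with an empty body to every GET, like the netcat one-liner."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "0")
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the console quiet; remove this method to see request logs

def start_dummy_server(port=9000):
    # bind localhost and serve in a background thread; port=0 picks a free port
    server = HTTPServer(("127.0.0.1", port), AlwaysOKHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server  # call server.shutdown() when done
```

Run start_dummy_server() and point your client at http://127.0.0.1:9000/; call shutdown() on the returned server when you are finished.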
silviudhttp://www.blogger.com/profile/02091543407707234135noreply@blogger.com0tag:blogger.com,1999:blog-5295034663176268607.post-50702138028647155622016-03-28T10:07:00.000-04:002016-03-28T10:07:05.721-04:00Backups with Duplicity and Dropbox<p>
Dropbox is a very popular file storage service that, by default, synchronizes<br/>
all your files across your devices. This is important to know, since you will be backing up data into<br/>
Dropbox and you don't want to download the backups on <i>every device</i> you have connected.
</p>
<p>
What we want to do is to backup files, encrypt them and send them to Dropbox. <br/>
All this is achieved with <a href="http://duplicity.nongnu.org/">Duplicity</a>.
</p>
<p>
This is the setup
<ul>
<li>Linux OS - any distro should work, but I tried it on Ubuntu 14.04 LTS</li>
<li>Dropbox account (going Pro or Business is recommended, since backups will typically grow past the 2 GB basic account) </li>
</ul>
</p>
<p>
To encrypt files you will need <a href="https://www.gnupg.org/">GPG</a>. In case you don't have a key on your system <br/>
we need to do a bit of work; if you already have a GPG key you can skip the next section.
</p>
<h4> GPG Setup</h4>
<p>
In this section we will create the GPG public/private key pair used to encrypt the data you back up to Dropbox.<br/>
<pre class="prettyprint prettyprinted">
# install
$ sudo apt-get install gnupg
#
# check if you have any keys
#
$ gpg --list-keys
# if this is empty then you need to create a set of keys
# follow the wizard to create keys
#
$ gpg --gen-key
gpg (GnuPG) 1.4.16; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
gpg: keyring `/home/yourname/.gnupg/secring.gpg' created
Please select what kind of key you want:
(1) RSA and RSA (default)
(2) DSA and Elgamal
(3) DSA (sign only)
(4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
0 = key does not expire
<n> = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all
Is this correct? (y/N) y
You need a user ID to identify your key; the software constructs the user ID
from the Real Name, Comment and Email Address in this form:
"Heinrich Heine (Der Dichter) <heinrichh@duesseldorf.de>"
Real name: Your Name
Email address: yourname@gmail.com
Comment:
You selected this USER-ID:
"Your Name <yourname@gmail.com>"
Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
....+++++
..+++++
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
+++++
gpg: checking the trustdb
....
#
#
# At this point the keys are created and saved into your keyring
# list keys
#
#
$ gpg --list-keys
/home/yourname/.gnupg/pubring.gpg
--------------------------------
pub 2048R/999B4B79 2016-03-26
         ^^^^^^^^ this is the key id used by duplicity
uid Your Name <yourname@gmail.com>
sub 2048R/99917D12 2016-03-26
# Note 999B4B79 which is your keyid
</pre>
</p>
<h4> Duplicity install </h4>
<pre class="prettyprint prettyprinted">
$ sudo apt-get install duplicity
</pre>
<p>
After installation if you are on Ubuntu 14.04 LTS you will need to apply
this patch <br/>
http://bazaar.launchpad.net/~ed.so/duplicity/fix.dpbx/revision/965#duplicity/backends/dpbxbackend.py
<br/>
to /usr/lib/python2.7/dist-packages/duplicity/backends/dpbxbackend.py
<br/>
If you don't know how to apply the patch, it is simpler to open the file and add the line marked 'line to add' below:<br/>
<pre class="prettyprint prettyprinted">
72 def command(login_required=True):
73 """a decorator for handling authentication and exceptions"""
74 def decorate(f):
75 def wrapper(self, *args):
76 from dropbox import rest ## line to add
77 if login_required and not self.sess.is_linked():
78 log.FatalError("dpbx Cannot login: check your credentials",log.ErrorCode.dpbx_nologin)
</pre>
</p>
<h4> Dropbox and duplicity setup</h4>
<p>
You need to have an account first. Open your browser and login.
</p>
<h4> Backups with duplicity and dropbox </h4>
<p>
Since this is the first time you run it, you need to create an authorization token; this is done as follows
</p>
<pre class="prettyprint prettyprinted">
$ duplicity --encrypt-key 999B4B79 full SOURCE dpbx:///
------------------------------------------------------------------------
url: https://www.dropbox.com/1/oauth/authorize?oauth_token=TOKEN_HERE
Please authorize in the browser. After you're done, press enter.
</pre>
<p>
Now authorize the application in your browser. This will create an access token in Dropbox.<br>
You can see your linked apps by going to <a href="https://www.dropbox.com/account#security">Security</a>; <br>
you should see <b>backend for duplicity</b> under linked apps. <br/>
In case you need to know which token is in use, it is stored on your system in <i>~/.dropbox.token_store.txt</i>.<br/>
After you authorize and press Enter, duplicity asks for your GnuPG passphrase and runs the backup, printing statistics like:
</p>
<pre class="prettyprint prettyprinted">
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
GnuPG passphrase:
Retype passphrase to confirm:
--------------[ Backup Statistics ]--------------
StartTime 1459031263.59 (Sat Mar 26 18:27:43 2016)
EndTime 1459031263.73 (Sat Mar 26 18:27:43 2016)
ElapsedTime 0.14 (0.14 seconds)
SourceFiles 2
SourceFileSize 1732720 (1.65 MB)
NewFiles 2
NewFileSize 1732720 (1.65 MB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 2
RawDeltaSize 1728624 (1.65 MB)
TotalDestinationSizeChange 388658 (380 KB)
Errors 0
-------------------------------------------------
</pre>
<h4> Backups </h4>
When the first full backup has finished you can start making incremental backups, list the backed-up files, and so on.
<pre class="prettyprint prettyprinted">
# list the backup files
duplicity --encrypt-key 999B4B79 list-current-files dpbx:///
#
## Make an incremental backup
duplicity --encrypt-key 999B4B79 incr SOURCE dpbx:///
.....
.....
.....
duplicity --encrypt-key 999B4B79 list-current-files dpbx:///
</pre>
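If you schedule these runs (from cron, say), a tiny wrapper keeps the key id and target in one place. This is a hypothetical sketch using the example values from this post (999B4B79 and dpbx:///); duplicity_cmd and run_backup are my names, not part of duplicity:

```python
import subprocess

ENCRYPT_KEY = "999B4B79"   # the example key id from above; use your own
TARGET = "dpbx:///"

def duplicity_cmd(action, source=None, key=ENCRYPT_KEY, target=TARGET):
    """Build a duplicity command line like the ones used in this post."""
    cmd = ["duplicity", "--encrypt-key", key, action]
    if source is not None:
        cmd.append(source)   # 'full' and 'incr' take a SOURCE directory
    cmd.append(target)
    return cmd

def run_backup(action, source=None):
    # hand off to duplicity; raises CalledProcessError if the backup fails
    subprocess.run(duplicity_cmd(action, source), check=True)
```

For example, a nightly cron job could call run_backup("incr", "/home/me/docs").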
<h4>Troubleshooting</h4>
<p> During a backup if you see something like </p>
<pre class="prettyprint prettyprinted">
Attempt 1 failed. NameError: global name 'rest' is not defined
Attempt 2 failed. NameError: global name 'rest' is not defined
</pre>
<p> See the note about Ubuntu 14.04 above: you need to patch the dpbxbackend.py file. </p>
<h4>Notes</h4>
<p>
If you use multiple computers and don't want every machine to download all <br/>
the backups from Dropbox, enable selective sync and exclude the Apps/duplicity <br/>
folder. <br/>
I haven't used duplicity for a long time and have heard mixed opinions: some say it is excellent, others<br/>
that it has design flaws (I haven't checked) where a new full backup is taken after a while even if<br/>
you only run incrementals. Remains to be seen. <br/>
If this doesn't work well I would look into <a href="https://borgbackup.readthedocs.org/en/stable/">Borg Backup</a>, which seems to be the best option these days since<br/>
it has dedup built in, among many other features. One thing it lacks, though, is duplicity's wide range of backends, which<br/>
cover pretty much every cloud storage solution around :).<br/>
</p>
silviudhttp://www.blogger.com/profile/02091543407707234135noreply@blogger.com0tag:blogger.com,1999:blog-5295034663176268607.post-76251444671129412302016-01-13T12:16:00.002-05:002016-01-13T12:20:35.697-05:00Sublime Text X11 Forward - linux headless<p>
One of the newer editors (compared with Vim or Emacs) is <a href="https://www.sublimetext.com/">Sublime Text</a>.
<br/>
It has many useful features and is quite popular these days; combined with vintage mode enabled (Vim emulation) it is <br/>
quite interesting.
</p>
<p>
This post shows what I did to get Sublime Text 3 working on a <b>remote headless Linux server</b>; I used CentOS 7.1 installed
with the Base package group.
</p>
<p>
Since sublime text needs a display to run you will need to install a few packages.
</p>
<pre class="prettyprint prettyprinted">
sudo yum install gtk2
sudo yum install pango
sudo yum install gtk2-devel
sudo yum install dejavu-sans-fonts # or the font of your choice
sudo yum install xorg-x11-xauth
</pre>
<p>
After all these packages are installed the ssh server (sshd for CentOS) needs to have the following settings.
<pre class="prettyprint prettyprinted">
# /etc/ssh/sshd_config
X11Forwarding yes
X11DisplayOffset 10
TCPKeepAlive yes
X11UseLocalhost yes
</pre>
Restart sshd in case you changed your config file
<pre>
sudo systemctl restart sshd
</pre>
</p>
<p>
I used PuTTY on a Windows box, so I had to make a small hack.
<pre class="prettyprint prettyprinted">
cd $HOME
touch .Xauthority # empty file
</pre>
<h5>Windows based</h5>
Configure PuTTY to enable X11 forwarding and connect to your server.
<br/>
If you use Windows you will also need to install <a href="http://sourceforge.net/projects/xming/">Xming</a>;
<br/>
after downloading, run the installer and start the Xming server.
<br>
<h5>Linux</h5>
You will need to run an X server (it doesn't matter which one) and enable X11 forwarding in your SSH client.
<pre class="prettyprint prettyprinted">
# when connecting, add -X
ssh -X my_host_with_sublime_installed
# or enable X11 forwarding in your ~/.ssh/config
# something like this will do
Host *
ForwardX11 yes
</pre>
<br>
<br>
If Sublime Text is not installed yet, download it from their site (it is always nice to have a license too) and extract <br/>
the files; you will typically end up with a directory called sublime_text_3.
<br>
<pre class="prettyprint prettyprinted">
# first check that the display is forwarded
$ echo $DISPLAY
localhost:10.0
$ cd sublime_text_3
$ ./sublime_text --wait
#
</pre>
At this point you should see a Sublime Text window pop up on your local screen (display).
<br/>
</p>silviudhttp://www.blogger.com/profile/02091543407707234135noreply@blogger.com0tag:blogger.com,1999:blog-5295034663176268607.post-51700494955831349672015-08-22T09:47:00.000-04:002015-08-29T20:01:09.965-04:00Vagrant with libvirt(KVM) Ubuntu14<p>
Vagrant doesn't have an official provider for libvirt, but there is a plugin
that lets it run KVM machines via libvirt on Linux.
</p>
<p>
You might first ask: why not VirtualBox/VMware etc.? Simply because KVM is built
into Linux and is very lightweight (especially if you run it on your laptop). Also, if you have
pre-made KVM virtual machines you can easily package them as Vagrant boxes.
</p>
<p>
This is what you need to get started on Ubuntu 14.
</p>
<p>
Obtain the package (the version may differ): wget https://dl.bintray.com/mitchellh/vagrant/vagrant_1.7.4_x86_64.deb
<br/>
Install package
</p>
<pre class="prettyprint prettyprinted">
$ sudo dpkg -i vagrant_1.7.4_x86_64.deb
</pre>
<p>
Install kvm, virt-manager, libvirt and ruby-dev
<pre class="prettyprint prettyprinted">
$ sudo apt-get install ruby-dev
$ sudo apt-get install kvm virt-manager
$ sudo apt-get install libvirt-dev
</pre>
</p>
<p>
Remove ruby-libvirt just in case, as we need a specific version
<pre class="prettyprint prettyprinted">
$ sudo apt-get remove ruby-libvirt
</pre>
Install the specific version from RubyGems
<pre class="prettyprint prettyprinted">
$ sudo gem install ruby-libvirt -v '0.5.2'
</pre>
</p>
<p>
Install the plugin
<pre class="prettyprint prettyprinted">
$ sudo vagrant plugin install vagrant-libvirt
</pre>
<b>Note:</b> this installed the plugin 'vagrant-libvirt (0.0.30)'.
</p>silviudhttp://www.blogger.com/profile/02091543407707234135noreply@blogger.com0tag:blogger.com,1999:blog-5295034663176268607.post-70206358221682410492014-12-18T09:26:00.000-05:002014-12-18T09:26:24.123-05:00Supervisor (python supervisord) email alerts<p>
The program supervisor, written in Python, is used to <i>supervise</i> long-running processes.
If a long-running process stops (crashes), supervisor detects it and restarts it, and entries are
written to the log files; however, unless you have a log aggregation tool, log into the server, or
run some other monitoring tool, <i>you will not know</i> that your
process has crashed.
</p>
<p>
However there is hope :) - you can set up an event listener in supervisor that emails you when
a process exits. To do so you need to install the Python package <b>superlance</b>.
This is how the setup is done.
</p>
<pre class="highlight">
# install superlance
$ sudo pip install superlance # if you don't have pip, try easy_install
# configure supervisor to send events to crashmail
$ sudo vim /etc/supervisor/supervisord.conf # change according to your setup
[eventlistener:crashmail]
command=crashmail -a -m root@localhost
events=PROCESS_STATE_EXITED
$ sudo supervisorctl reread && sudo supervisorctl update # reload the config so the new listener starts
# done :)
</pre>
<p>
In the example above, if a process crashes (exits), an event is sent to crashmail, which in turn
emails root@localhost - of course you can change the email address. crashmail actually uses sendmail
to send email (postfix and qmail ship a sendmail-like program, so no worries).
<br/>
Also, the alert is sent for any program that crashes; if you want to filter, you can select
just the programs you care about by specifying -p program_name instead of -a. For more info see the <a href="http://superlance.readthedocs.org/en/latest/crashmail.html#command-line-syntax"> Crashmail </a> section of the superlance docs.
</p>
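Under the hood crashmail is just a supervisor event listener: it prints READY, reads a header line and a payload from supervisor on stdin, acts on the event, and acknowledges with RESULT on stdout. A minimal sketch of one round of that protocol (handle_one_event is a hypothetical helper, shown only to illustrate what superlance implements for you):

```python
def handle_one_event(stdin, stdout, handler):
    """Process one supervisor event: announce readiness, read the
    space-separated key:value header line and the payload it describes,
    let `handler` act on them, then acknowledge with RESULT."""
    stdout.write("READY\n")
    stdout.flush()
    header = dict(token.split(":", 1) for token in stdin.readline().split())
    payload = stdin.read(int(header["len"]))
    handler(header, payload)          # e.g. send an email here
    stdout.write("RESULT 2\nOK")      # 2 is the byte length of the "OK" body
    stdout.flush()
```

A real listener would loop over sys.stdin/sys.stdout forever; supervisor restarts it if it dies, just like any supervised program.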
silviudhttp://www.blogger.com/profile/02091543407707234135noreply@blogger.com3tag:blogger.com,1999:blog-5295034663176268607.post-58065215503510286572014-11-21T10:51:00.002-05:002014-11-21T10:51:34.307-05:00Gitlab(Rails) gem loader error<p> I was trying to make a simple bash pre-receive hook into Gitlab and got one of this </p>
<pre class="highlight">
#!/bin/bash
# pre-receive hook
knife node show
# Error
/usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.5/lib/bundler/rubygems_integration.rb:214:in `block in replace_gem': chef is not part of the bundle. Add it to Gemfile. (Gem::LoadError)
</pre>
<p>
Initially I thought switching the hook to Ruby would fix it, but after trying all 6 ways to <br/>
execute a command according to <a href="http://tech.natemurray.com/2007/03/ruby-shell-commands.html"> http://tech.natemurray.com/2007/03/ruby-shell-commands.html </a>
with no luck, I looked further into how Rails loads gems, and it turns out you can't load a gem that is not <br/>
declared in your application's Gemfile.
</p>
<p>
So - what options do you really have? Install all the gems and their dependencies into the Rails application's Gemfile just to execute a <br/>
command?! Well, there is a different way: <i>sudo</i> to the rescue :)
</p>
<pre class="highlight">
#!/bin/bash
# pre-receive hook
sudo -u USER_THAT_RUNS_THE_APP knife node show
# also make sure in sudoers that USER_THAT_RUNS_THE_APP may execute without a tty
Defaults:USER_THAT_RUNS_THE_APP !requiretty
</pre>
silviu dicuhttp://www.blogger.com/profile/12270639167316589111noreply@blogger.com0tag:blogger.com,1999:blog-5295034663176268607.post-24501069070483340332014-09-14T09:25:00.000-04:002014-09-14T09:25:15.208-04:00Vim - find occurrences in files.Vim is the editor for anybody using the cli on daily bases. One useful feature it has is the find/grep into files.
Obviously you can exit or suspend Vim and run find or grep, but not many know that Vim has this built in.
You can simply use vimgrep and friends - for more info see <a href="http://vim.wikia.com/wiki/Find_in_files_within_Vim">http://vim.wikia.com/wiki/Find_in_files_within_Vim</a>.
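For example, to search for a pattern across files and then walk through the matches via the quickfix list (the pattern and file glob below are placeholders):

```vim
:vimgrep /TODO/ **/*.rb    " search recursively through .rb files
:copen                     " open the quickfix window listing every match
:cnext                     " jump to the next occurrence
:cprev                     " jump back to the previous one
```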
silviudhttp://www.blogger.com/profile/02091543407707234135noreply@blogger.com0tag:blogger.com,1999:blog-5295034663176268607.post-71749958058989641152014-03-25T19:58:00.001-04:002014-03-25T19:58:33.132-04:00Vim setup for Chef(Opscode) Cookbooks <p>I've started programming seriously Chef cookbooks by a while but always felts is something missing ...
Well I didn't have </p>
<ul>
<li> jump to definition for any Chef dsl </li>
<li> auto completion </li>
<li> syntax highlight </li>
</ul>
<p>Recently I found a solution; this is my Vim setup (you can do it just as easily in Sublime Text).
These are the tools in my setup </p>
<ul>
<li>vim</li>
<li>vim-chef</li>
<li>ripper-tags (to my surprise, ctags doesn't work well with Ruby files ...) </li>
</ul>
<p> The setup is as simple as </p>
<pre class="highlight">
# vim with pathogen
$ git clone https://github.com/vadv/vim-chef ~/.vim/bundle/vim-chef
$ sudo /opt/chef/embedded/bin/gem install gem-ripper-tags
$ knife cookbook create test_cookbook -o .
# create tags - there are better ways to do it - see gem-tags for example
$ ripper-tags -R /opt/chef/embedded/lib/ruby/gems/1.9.1/gems/chef-11.10.4 -f tags
$ ctags -R -f tags_project
vim
:set tags=tags,tags_project
# done
</pre>
silviudhttp://www.blogger.com/profile/02091543407707234135noreply@blogger.com0tag:blogger.com,1999:blog-5295034663176268607.post-8567446457104435482014-03-02T09:10:00.002-05:002014-03-02T09:10:48.281-05:00Getting started with the new AWS toolsAWS replaced their java based tool with a neat python package for linux (didn't try the windows based ones yet ...).
Why are these tools nice?
<ul>
<li>written in Python</li>
<li>one tool covers all services</li>
<li>wizard-style configuration</li>
</ul>
To get started:
<pre class="prettyprint">
# use virtualenv or global
# this example shows virtualenv
$ mkdir AWS
$ virtualenv AWS
...
$ source AWS/bin/activate
# install the tools from pypi
$ pip install awscli
...
# configure
$ aws configure
AWS Access Key ID [None]: XXXXXX
AWS Secret Access Key [None]: XXXXXX
Default region name [None]: us-west-1
Default output format [None]: json
$ aws ec2 describe-regions
{
"Regions": [
{
"Endpoint": "ec2.eu-west-1.amazonaws.com",
"RegionName": "eu-west-1"
},
{
"Endpoint": "ec2.sa-east-1.amazonaws.com",
"RegionName": "sa-east-1"
},
{
"Endpoint": "ec2.us-east-1.amazonaws.com",
"RegionName": "us-east-1"
},
{
"Endpoint": "ec2.ap-northeast-1.amazonaws.com",
"RegionName": "ap-northeast-1"
},
{
"Endpoint": "ec2.us-west-2.amazonaws.com",
"RegionName": "us-west-2"
},
{
"Endpoint": "ec2.us-west-1.amazonaws.com",
"RegionName": "us-west-1"
},
{
"Endpoint": "ec2.ap-southeast-1.amazonaws.com",
"RegionName": "ap-southeast-1"
},
{
"Endpoint": "ec2.ap-southeast-2.amazonaws.com",
"RegionName": "ap-southeast-2"
}
]
}
# Done!
</pre>
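Since the output format is json, it is easy to post-process; for example, extracting just the region names in Python (the sample below is a hypothetical, trimmed-down describe-regions response):

```python
import json

# a trimmed sample of `aws ec2 describe-regions` output (illustrative only)
sample = """
{
  "Regions": [
    {"Endpoint": "ec2.eu-west-1.amazonaws.com", "RegionName": "eu-west-1"},
    {"Endpoint": "ec2.us-east-1.amazonaws.com", "RegionName": "us-east-1"}
  ]
}
"""

def region_names(describe_regions_json):
    """Pull the RegionName values out of a describe-regions response."""
    return [r["RegionName"] for r in json.loads(describe_regions_json)["Regions"]]

print(region_names(sample))  # ['eu-west-1', 'us-east-1']
```

The CLI can also do this natively with its --query option (JMESPath), e.g. aws ec2 describe-regions --query 'Regions[].RegionName'.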
For more info: the project is hosted at <a href="https://github.com/aws/aws-cli">github.com</a>,
the command reference is at <a href="http://docs.aws.amazon.com/cli/latest/reference/">AWS tools references</a>,
and the home page at <a href="http://aws.amazon.com/cli/">aws.amazon.com/cli</a>.silviudhttp://www.blogger.com/profile/02091543407707234135noreply@blogger.com0tag:blogger.com,1999:blog-5295034663176268607.post-46919184128981653102013-11-20T20:32:00.000-05:002013-11-20T20:32:04.245-05:00Javascript testing with Real Browsers<pre class="prettyprint">
karma run --runner-port 9100
PhantomJS 1.4 (Linux): Executed 1 of 1 SUCCESS (0.397 secs / 0.071 secs)
Chrome 30.0 (Linux): Executed 1 of 1 SUCCESS (0.518 secs / 0.06 secs)
TOTAL: 2 SUCCESS
</pre>silviudhttp://www.blogger.com/profile/02091543407707234135noreply@blogger.com0tag:blogger.com,1999:blog-5295034663176268607.post-59447012654952205262013-10-20T11:00:00.001-04:002013-10-20T11:12:29.937-04:00Chef server internal error (11.08)Tried the new version of chef-server 11.08 and looks like is broken.
There is a bug in their Jira: <a href="https://tickets.opscode.com/browse/CHEF-4339">CHEF-4339</a>.
I tried on CentOS, but it looks like Ubuntu is broken as well (see the bug description).
To see the error logs:
<pre class="literal-block prettyprint prettyprinted">
$ chef-server-ctl tail
==> /var/log/chef-server/nginx/access.log <==
192.168.122.1 - - [20/Oct/2013:14:56:42 +0000] "PUT /sandboxes/000000000000a38d5dd8e2763f913c6c HTTP/1.1" 500 "8.109" 36 "-" "Chef Knife/11.6.0 (ruby-1.9.3-p429; ohai-6.18.0; x86_64-linux; +http://opscode.com)" "127.0.0.1:8000" "500" "8.049" "11.6.0" "algorithm=sha1;version=1.0;" "chef-user" "2013-10-20T14:54:00Z" "oMRtV6loUDnbKJuGcW6nqBbF8ww=" 1029
==> /var/log/chef-server/erchef/current <==
2013-10-20_14:56:42.62140
2013-10-20_14:56:42.62144 =ERROR REPORT==== 20-Oct-2013::14:56:42 ===
2013-10-20_14:56:42.62145 webmachine error: path="/sandboxes/000000000000a38d5dd8e2763f913c6c"
2013-10-20_14:56:42.62145 {error,
2013-10-20_14:56:42.62146 {throw,
2013-10-20_14:56:42.62146 {checksum_check_error,26},
2013-10-20_14:56:42.62146 [{chef_wm_named_sandbox,validate_checksums_uploaded,2,
2013-10-20_14:56:42.62147 [{file,"src/chef_wm_named_sandbox.erl"},{line,144}]},
2013-10-20_14:56:42.62147 {chef_wm_named_sandbox,from_json,2,
2013-10-20_14:56:42.62148 [{file,"src/chef_wm_named_sandbox.erl"},{line,99}]},
2013-10-20_14:56:42.62148 {webmachine_resource,resource_call,3,
2013-10-20_14:56:42.62148 [{file,"src/webmachine_resource.erl"},{line,166}]},
2013-10-20_14:56:42.62149 {webmachine_resource,do,3,
2013-10-20_14:56:42.62149 [{file,"src/webmachine_resource.erl"},{line,125}]},
2013-10-20_14:56:42.62150 {webmachine_decision_core,resource_call,1,
2013-10-20_14:56:42.62150 [{file,"src/webmachine_decision_core.erl"},{line,48}]},
2013-10-20_14:56:42.62150 {webmachine_decision_core,accept_helper,0,
2013-10-20_14:56:42.62151 [{file,"src/webmachine_decision_core.erl"},{line,583}]},
2013-10-20_14:56:42.62151 {webmachine_decision_core,decision,1,
2013-10-20_14:56:42.62151 [{file,"src/webmachine_decision_core.erl"},{line,489}]},
2013-10-20_14:56:42.62152 {webmachine_decision_core,handle_request,2,
2013-10-20_14:56:42.62153 [{file,"src/webmachine_decision_core.erl"},{line,33}]}]}}
==> /var/log/chef-server/erchef/erchef.log.1 <==
2013-10-20T14:56:42Z erchef@127.0.0.1 ERR req_id=rOkhxZcSowyaKaD+WsjFKg==; status=500; method=PUT; path=/sandboxes/000000000000a38d5dd8e2763f913c6c; user=chef-user; msg=[]; req_time=8043; rdbms_time=5; rdbms_count=2; s3_time=8028; s3_count=1
</pre>
However the integration tests all pass ...
<pre class="literal-block prettyprint prettyprinted">
$ chef-server-ctl test
...
Sandboxes API Endpoint
Sandboxes Endpoint, POST
when creating a new sandbox
should respond with 201 Created
Sandboxes Endpoint, PUT
when committing a sandbox after uploading files
should respond with 200 OK
Deleting client pedant_admin_client ...
Deleting client pedant_client ...
Pedant did not create the user admin, and will not delete it
Deleting user pedant_non_admin_user ...
Deleting user knifey ...
Finished in 54.02 seconds
70 examples, 0 failures
</pre>
Hopefully will be fixed soon.silviudhttp://www.blogger.com/profile/02091543407707234135noreply@blogger.com0