Rackspace Cloud Monitoring Issue with Debian Squeeze

This was an issue I found while installing the Rackspace Cloud Monitoring agent on Debian 6 (Squeeze), after starting on the initial walkthrough found here:

http://www.rackspace.com/knowledge_center/article/install-the-cloud-monitoring-agent#Setup

At step 5 - Install the agent:

sudo apt-get install rackspace-monitoring-agent

After running this, the results were:

apt-get install rackspace-monitoring-agent
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
rackspace-monitoring-agent
0 upgraded, 1 newly installed, 0 to remove and 11 not upgraded.
Need to get 0 B/3,799 kB of archives.
After this operation, 10.6 MB of additional disk space will be used.
Selecting previously deselected package rackspace-monitoring-agent.
(Reading database ... 15499 files and directories currently installed.)
Unpacking rackspace-monitoring-agent (from .../rackspace-monitoring-agent_0.1.9-89_amd64.deb) ...
Setting up rackspace-monitoring-agent (0.1.9-89) ...
update-rc.d: using dependency based boot sequencing
Stopping rackspace-monitoring-agent: rackspace-monitoring-agent failed!

The only info in the logs was:

Fri Feb 21 07:37:37 2014 INF: Log file started (pid 1055, path=/var/log/rackspace-monitoring-agent.log)
Fri Feb 21 07:37:37 2014 ERR: 'monitoring_token' is missing from 'config'

Great, not really telling me much. So I looked around and came across a Git repository located here: https://github.com/virgo-agent-toolkit/rackspace-monitoring-agent.

I then ran through the source install:

git clone https://github.com/racker/virgo virgo-0.1.9
cd virgo-0.1.9
git submodule update --init --recursive
./configure && make
make install

Note that before building, you will want to make sure you have the following packages installed:

sudo apt-get install build-essential devscripts

From here you can proceed with installing rackspace-monitoring-agent by following the Rackspace guide mentioned above, and the installation should run through as normal.
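For completeness, the missing piece from the log above is the monitoring token, which the setup step of that guide writes out for you. A minimal sketch of that step, assuming the agent binary is installed (it prompts for your cloud username and API key; the config path is my best recollection, so treat it as an assumption):

# One-time setup: authenticates against the Cloud Monitoring API and
# writes the monitoring token into the agent's config file
# (/etc/rackspace-monitoring-agent.cfg on a typical install).
sudo rackspace-monitoring-agent --setup

# Start the agent once the token is in place.
sudo service rackspace-monitoring-agent start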


Turbolift with Cloud Files

Writing this article on a tool that I have come across for Rackspace Cloud Files: 'Turbolift', which can be found on GitHub here. I have seen issues where people want to bulk upload or even bulk delete data from a given container. I am going to run through a few snippets using Turbolift that may help if you ever find yourself in the same situation.

Before you 'git clone' Turbolift, you will want to install a few tools. Turbolift is a utility written in Python, so a couple of packages are required before installing it: 'python-dev' and 'python-setuptools'.
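On a Debian or Ubuntu system (assumed here), those prerequisites can be installed with apt:

# Headers and packaging helpers needed to build Python extensions.
sudo apt-get install python-dev python-setuptools

Once those are in place, you can install Turbolift itself by running the following: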

git clone git://github.com/cloudnull/turbolift.git
cd turbolift
sudo python setup.py install

After this you can run the following to see all options available for this tool.

turbolift -h

The main two options I will touch on are 'upload' and 'delete'. You can run the following to view the switches for each:

turbolift upload -h
turbolift delete -h

The following option will allow you to upload an entire local directory to your Rackspace Cloud Files container:

turbolift -u [CLOUD-USERNAME] -a [CLOUD-API-KEY] --os-rax-auth [REGION] upload -s [PATH-TO-DIRECTORY] -c [CONTAINER-NAME]

Now, there was a bit of an issue I ran into when uploading a large directory. By default, when building the list of files to upload, Turbolift sorts the files by size. If you have a lot of files, this can be a time-consuming operation. I was able to work around it by adding an extra argument to the 'optionals.py' file located in the 'turbolift/turbolift/arguments/' directory:

optionals.add_argument('--no-sort',
                       action='store_true',
                       help=('By default when getting the list of files to upload '
                             'Turbolift will sort the files by size. If you have a lot '
                             'of files this may be a time consuming operation. This flag will '
                             'disable that function.'))

The --no-sort flag allows the upload to run without first sorting the files by size; Turbolift simply starts without the sort check.

Now, on to deletes (be careful, because there are no backups for these). There are a few switches you can use to make the process run as fast as possible.

turbolift -u [CLOUD-USERNAME] -a [CLOUD-API-KEY] --os-rax-auth [REGION] delete -c [CONTAINER-NAME]

Keep in mind that by default these operations go over the public network. You can use the '--internal' option so that they happen over ServiceNet (Rackspace's private network, available when running from a cloud server in the same region), which speeds things up considerably.
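For example, a ServiceNet delete might look like this (I am assuming '--internal' sits with the other global options before the subcommand; check 'turbolift -h' on your version to confirm):

turbolift -u [CLOUD-USERNAME] -a [CLOUD-API-KEY] --os-rax-auth [REGION] --internal delete -c [CONTAINER-NAME]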

Recently I attempted to delete over a TB of data (about 4 million objects) from a container using a 2 GB server. Turbolift practically laughed at me. For a deletion of that size, you will want something like a 15-30 GB slice; I found a 30 GB slice did the trick. That might be a little pricey, but I only needed it for about 6 hours. I saw turbolift/python using about 20 GB of memory on the server, so keep that in mind when you attempt a deletion of that size. Luckily, Turbolift writes its logs to a file named 'turbolift.log'. You know, for your enjoyment :D
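During a long-running delete like that, it helps to watch the log and the memory pressure from a second shell. A simple sketch, assuming you launched Turbolift from the current directory:

# Follow Turbolift's progress as it writes to its log file.
tail -f turbolift.log

# In another terminal, watch free memory (refreshes every 2 seconds).
watch free -m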

On an extra note, the 'turbolift clone' option (see 'turbolift clone -h') works great when you want to clone a container to another container in the same region, or to a container in another region, whichever you prefer.

turbolift -u [CLOUD-USERNAME] -a [CLOUD-API-KEY] --os-rax-auth [SOURCE-REGION] clone -sc [SOURCE-CONTAINER-NAME] -tc [DESTINATION-CONTAINER-NAME] -tr [DESTINATION-REGION]

I hope this helps as I know I needed it personally. If you have any questions, feel free to ask.


JungleDisk on Linux 64bit

So this is almost a repeat of an article I wrote earlier, Installation Error with JungleDisk on Fedora; this was almost the same issue. Once you download your file located here, simply installing the package will not get JungleDisk running, especially on x86_64 OSes. For some reason, on some distributions like Linux Mint, JungleDisk does not play nice with the default icons.

What you will need to do is install new icons for your current theme; suitable icon sets can be found on gnome-look.org. Once you have them installed and selected, there is one more step. Just like before, you will need to create a symlink for libnotify.

Simply run the following commands:

sudo ln -s /usr/lib/x86_64-linux-gnu/libnotify.so.4 /usr/lib/x86_64-linux-gnu/libnotify.so.1
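To verify the symlink took, and to see which libnotify the JungleDisk binary actually resolves, something like this helps (the binary name 'junglediskdesktop' is a guess on my part; substitute whatever your package installed):

# Confirm the compatibility symlink resolves to the real library.
ls -l /usr/lib/x86_64-linux-gnu/libnotify.so.1

# List the libnotify dependency as the dynamic linker sees it.
ldd "$(which junglediskdesktop)" | grep -i notify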

Once you have libnotify symlinked, run JungleDisk and it should prompt you to create or sign in to your account. Simple as that. I hope this helps you as much as it has helped me.

root-


Inxi to check software and hardware configuration

Inxi is a great tool that collects, formats, and provides multiple ways to output data about your system configuration, both hardware and software. You can find additional information at:

http://code.google.com/p/inxi/

A sample run shows you what data you can expect to see.

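If you want to reproduce that output yourself, the full-report flag is the quickest way (inxi also supports narrower views; see 'inxi -h'):

# -F prints the full machine report: CPU, graphics, network, drives,
# partitions, sensors and general info.
inxi -F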

-root



Mailgun Programmable Mail Server

Mailgun is a set of powerful APIs that allow you to send, receive, and track your email; a simple tool built for developers. Mailgun is a programmable email platform: it allows your application to become a fully featured email server. Send and receive messages, create mailboxes, and run email campaigns with ease using your favorite programming language.

Optimized Deliverability
-Get emails delivered to inbox.
-Processing of ESP feedback to optimize sending rates.
-Clean IP addresses and whitelist registrations.
-Proper authentication using SPF and DKIM.
-Automated bounce, unsubscribe and complaint handling.

Receiving, Parsing & Storage
-Real email servers, not just SMTP relay.
-White label domains and spam filtering.
-Routes to filter, parse and POST messages to your app.
-Attachments, signatures and replies parsing.

Track Everything
-Analytics to measure and improve your performance.
-Track clicks, opens, unsubscribes, bounces and complaints.
-Create multiple campaigns with simple tagging.
-Get notified of events in real time via HTTP.
-Use with Mailing Lists for scalable newsletters.

Here is a sample cURL request:

curl -s -k --user api:key-3ax6xnjpsample29jd6fds4gc373sgvjxteol0 \
https://api.mailgun.net/v2/yourdomain.com/messages \
-F from='Excited User <root@linuxterminal.org>' \
-F to='Dude <dude@linuxterminal.org>' \
-F subject='Hey' \
-F text='Testing some Mailgun awesomeness!'

After the message is sent, you should get a delivery notice like the following:

{ "message": "Queued. Thank you.", "id": "<20130525054324.11585.46919@linuxterminal.org>" }
Delivered!
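As a side note, the same endpoint accepts file attachments as extra form fields. A hedged example (the attachment path is hypothetical; the 'attachment' field name is from Mailgun's docs as I recall them):

curl -s -k --user api:key-3ax6xnjpsample29jd6fds4gc373sgvjxteol0 \
https://api.mailgun.net/v2/yourdomain.com/messages \
-F from='Excited User <root@linuxterminal.org>' \
-F to='Dude <dude@linuxterminal.org>' \
-F subject='Hey again' \
-F text='Same message, now with a file attached.' \
-F attachment=@/tmp/report.pdf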

Check out their documentation, found here! Very helpful and a great product!


Collection of Cheat Sheets

I recently came across a site which holds a collection of cheat sheets, covering everything from Python, Ruby, and MySQL to Linux, and the list goes on! Check out OverAPI.com. It is a great reference point for almost everything.

Check it!



Start Apache without manually entering the SSL Passphrase

Today I started testing 'StartSSL', a free SSL certificate issuer whose certificates are actually trusted by browsers, not self-signed as I had originally thought. Visit the site here!

This post is not about the issuer itself, though, but about how to remove the passphrase from a private key. Since a passphrase-protected SSL key will prompt for the password whenever the web server restarts, I thought I would remove it. Simply run the following:

openssl rsa -in current_privatekey.key -out nopass_privatekey.key

Once this is created, update your config to point at the new key file and restart Apache; the SSL cert should continue to work, now without prompting for a password.
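Since the key now sits unencrypted on disk, it is worth locking down its permissions before the reload. A minimal sketch, assuming a Debian-style Apache service name:

# Only root should be able to read the unencrypted key.
sudo chown root:root nopass_privatekey.key
sudo chmod 600 nopass_privatekey.key

# Restart Apache so it picks up the new key, now without a passphrase prompt.
sudo service apache2 restart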

Enjoy


Using Top to show %CPU usage

Just thought I would share how to view and focus on %CPU usage using the 'top' command.

Run 'top', then use these keys (top's keys are case-sensitive):
-Press 'z' to toggle color mode.
-Press 'x' to focus on the current sort field, which is %CPU by default.
-Press 'b' to highlight (bold) the focused %CPU column.
A non-interactive alternative is sketched just below.
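If you would rather have a one-shot, scriptable view of the top CPU consumers instead of the interactive display, the standard procps 'ps' can do it:

# Show the ten busiest processes, sorted by %CPU, highest first
# (11 lines of output: 1 header + 10 processes).
ps -eo pid,user,%cpu,%mem,comm --sort=-%cpu | head -n 11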

I recorded a quick video on this as well, if you want to see what the output looks like. ENJOY:



Changing Timezones on Server

I get this question a lot: 'How do I change the timezone on my server?' It is quick and simple, and applies to both Ubuntu and CentOS servers.

Run the following from the server:

~: date
Wed Apr 24 11:46:20 UTC 2013
~:

This will show you the current time, which on my server was set to UTC.

You can view all the timezones for the Americas by doing the following:

~: ls /usr/share/zoneinfo/America/

Then you will want to back up the current localtime file:

mv /etc/localtime /etc/localtime.bak04-24-2013

Then symlink the zone you want into place:

ln -s /usr/share/zoneinfo/America/Chicago /etc/localtime

Once this is complete, type 'date' and you should see the new time:

~: date
Wed Apr 24 06:59:55 CDT 2013

Now it is set to CDT. Hope this helps anyone looking to update the timezone on their server.
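As a side note, both distro families also ship interactive helpers that take care of this for you (dpkg-reconfigure is Debian/Ubuntu only; tzselect just helps you find the right zone string):

# Debian/Ubuntu: pick the timezone from a menu and rewrite /etc/localtime.
sudo dpkg-reconfigure tzdata

# Any distro with glibc tools: interactively identify the correct zone name.
tzselect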


IP issues with WP-Comments and Varnish

So I started running into issues after installing Varnish: comments on any of my posts appeared to come from 127.0.0.1. If you are unsure of what Varnish is, here is a brief explanation of what it can do for you.

What is Varnish?

 Varnish Cache is a web accelerator, sometimes referred to as a HTTP accelerator or a reverse HTTP proxy, that will significantly enhance your web performance.

Varnish speeds up a website by storing a copy of the page served by the web server the first time a user visits that page. The next time a user requests the same page, Varnish will serve the copy instead of requesting the page from the web server.

This means that your web server needs to handle less traffic and your website’s performance and scalability go through the roof. In fact Varnish Cache is often the single most critical piece of software in a web based business. - Varnish-Cache

Here is a great video to explain more, found here!

Back to my issue. Comments would come in, and I was not able to determine their origin because Varnish was proxying the requests. Here is a great workaround that solved the issue.

You will want to edit two files in your WordPress install.

wp-includes/pluggable.php

and

wp-includes/comment.php 

Add the following code to the pluggable.php file

if ( ! function_exists( 'get_user_real_ip' ) ) {
    // Prefer the X-Forwarded-For header that Varnish sets; fall back to
    // REMOTE_ADDR when the request did not pass through the proxy.
    function get_user_real_ip() {
        $userip = isset( $_SERVER['HTTP_X_FORWARDED_FOR'] )
            ? $_SERVER['HTTP_X_FORWARDED_FOR']
            : $_SERVER['REMOTE_ADDR'];
        return $userip;
    }
}

Then, in the comment.php file, comment out the first line and replace it with the second line:

/* $commentdata['comment_author_IP'] = apply_filters('pre_comment_user_ip', $commentdata['comment_author_IP']); */
$commentdata['comment_author_IP'] = preg_replace( '/[^0-9a-fA-F:., ]/', '', get_user_real_ip() );

Then save the files; there is no need to restart Apache, but you can if you like. You can test by submitting a comment to yourself: you should now see the IP of the client host and not 127.0.0.1. Hope this helps anyone running into this issue.
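As a quick sanity check that requests really are flowing through Varnish in the first place, you can look for the response headers Varnish typically adds (the hostname is a placeholder):

# Varnish-served responses usually carry Via, X-Varnish and/or Age headers.
curl -sI http://www.yourdomain.com/ | grep -i -E '^(via|x-varnish|age):'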