Linux Tips 17: Find and replace text in multiple files

Find and replace across multiple files is a rarely used but extremely time-saving ability of Linux that I cannot live without. It is achieved by chaining a few commands together: list the files you want to change (find), keep only the files that contain the string you want to replace (grep), and perform the replacements (sed).

Let's say we have a lot of code that uses the function registerUser, implemented back when there was only one class of users. Now another class of users needs to access the system, what were "users" are now called "admins", and every call to registerUser must become registerAdmin. The command needed is:


find . -type f | xargs grep -l 'registerUser' | xargs sed -i -e 's/registerUser/registerAdmin/g'

The first part of the command is find, which lists all files while excluding directories. The result is piped to grep, which lists only the files that contain registerUser. That list is then sent to sed, which replaces all occurrences of registerUser with registerAdmin in place. (Note that sed's in-place flag differs between implementations: GNU sed takes -i on its own, while BSD/macOS sed requires an explicit backup suffix, such as sed -i ''.)
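When trying this on an unfamiliar codebase it is worth previewing the matches before replacing anything. Below is a sketch that does exactly that; the demo directory, the sample file, and the *.php filter are assumptions for illustration, and GNU find/sed are assumed (BSD/macOS sed needs -i '' rather than -i):

```shell
# Sample tree to demonstrate on (the demo directory is just for illustration)
mkdir -p demo
printf 'registerUser();\n' > demo/a.php

# Preview which files and lines would change before touching anything
find demo -type f -name '*.php' | xargs -r grep -n 'registerUser'

# Perform the in-place replacement (GNU sed; BSD/macOS sed needs -i '')
find demo -type f -name '*.php' | xargs -r grep -l 'registerUser' \
  | xargs -r sed -i -e 's/registerUser/registerAdmin/g'
```

The -name filter keeps the replacement away from binaries and version control metadata, which the bare find . -type f form would happily hand to sed.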

The command is quite long and hard to remember, which is why I normally write it down somewhere. Having it archived on my blog means I can just look it up here in the future.

CakePHP - Using custom form tags without losing form auto fill magic

The CakePHP FormHelper is one of the biggest time savers in the whole framework. It not only generates common form elements but also auto-fills pretty much every input field. The only problem I have with it is the difficulty of generating a form via $form->create(...) that submits to the current page, i.e. a form with action="". This is because when the URL in $form->create(...) is set to '' or null, the default action URL of /model/add takes over.

The solution I often end up with is writing my own <form> and </form> tags instead of using $form->create() and $form->end(). The catch with custom form tags is that CakePHP will no longer auto-fill input fields, meaning a lot of code has to be written to detect and set input values. Overcoming this simply requires pairing the custom form tag with $form->end(), so that your form code looks like:


<form action="" method="POST">
<?= $form->input("User.name") ?>
<?= $form->input("User.email") ?>
<?= $form->end() ?>

Linux Tips 16: Removing crashed processes

In my previous tip on monitoring Linux processes I wrote about top as a tool for doing just that. In this post I will expand on that and cover the main commands used to stop processes that are taking over the system with excessive CPU or memory usage. There are two commands.

The kill command stops a process given its process id (PID). To use it on a process with the PID 21194, simply execute kill -9 21194 and the process will be killed. If you need to remove several processes that share the same name, e.g. all Apache instances, use the command killall -SIGKILL apache

If you need to kill all processes with a common name, e.g. PostgreSQL, which can spawn processes named after the database each is serving, you can use a combination of ps, grep, awk, and kill. The command is: kill `ps aux | grep "postgre" | grep -v "grep" | awk '{print $2}'` The pipeline takes the results of ps aux, finds "postgre" using grep, removes the line for grep itself, and uses awk to print the second field of each line (the PID). kill then receives all matching PIDs, effectively killing every matching process.
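On systems that ship the procps tools, pgrep and pkill bundle that whole ps/grep/awk pipeline into single commands. A sketch, using a throwaway sleep process as a stand-in for a runaway one:

```shell
# Start a throwaway background process to act on
sleep 300 &

# pgrep lists PIDs by process name; pkill signals them
pgrep -x sleep          # print the PID(s) of any process named exactly "sleep"
pkill -9 -x sleep       # send SIGKILL to all of them, like the ps/awk/kill chain

# Reap the killed child so it does not linger as a zombie
wait 2>/dev/null || true
```

The -x flag matches the process name exactly; pgrep/pkill also accept -f to match against the full command line, which is closer to the grep "postgre" behaviour above but riskier, since it can match unintended processes.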

A good summary of web development ...



Why is this true? I think it is because most developers are not that invested in the products they produce and most customers want everything done yesterday, which leads to a lack of thought about what is really important. Really, try asking a client what is important on a list of features: most will pick at least half, and when everything is important nothing can be. Yet once everything is implemented, 80% of the client's time will be spent using only 20% of the features.

Is there a solution to this problem? I don't know. Contracting companies and clients both want to keep hours to a minimum, and most are simply unable to picture perfection, so whatever works well enough is often what is delivered and accepted. However, I think it would be beneficial, with willing clients, to reverse the default time allocation: spend the most time on design and testing and far less on coding and ad-hoc changes. I do not consider changes bad, but even in an agile process there must be a clear vision of what the software is intended for, so that feature creep does not become an issue.

Personally I do not mind creating a few solutions to give clients a real choice. In today's economic circumstances some may say that is not financially wise, but I believe good products keep delivering value to both the creator and the client; they are an investment by both parties rather than the end result of a simple transaction.

Linux Tips 15: Using top to identify slow processes

For a long time I had been using ps aux as a way of seeing which processes were slowing down a server or *nix based computer. However, top is a far better tool for this task as it shows a real-time view of processes. Its interactive commands can be listed by typing ? once the program has launched. The most useful for my purposes is o: after pressing o, type cpu to have top order all processes by CPU usage, making any runaway processes easy to spot.
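For scripted or one-off checks the same ordering can be approximated without an interactive session; a sketch assuming GNU procps ps:

```shell
# Snapshot the top CPU consumers, highest first (GNU procps ps; flags vary elsewhere)
ps aux --sort=-%cpu | head -n 5
```

This prints the header line plus the four hungriest processes, which is handy inside monitoring scripts where top's interactive screen is no use.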

Top is an interactive command line program and looks like:

Using PHPMailer to create a centralized email system in Cake PHP

Prior to CakePHP 1.2 there was no readily available built-in email component, and people had to resort to writing their own components around PHPMailer. I still believe it is better to use a custom PHPMailer component under most circumstances, as it is easier to configure, supports more services (e.g. Gmail), and gives far greater email authoring options. Currently, though, the number one reason I continue using a PHPMailer component is the fact that it allows me to centralize all email communication within an application; although my approach may strike some as mixing the controller/model boundaries too much.

The component is a basic wrapper around PHPMailer, which you should download and unpack into a folder called yourapp/app/vendors/mailer such that yourapp/app/vendors/mailer/class.phpmailer.php exists. Then, in your components directory, create the file mailer.php with the following content:


<?php
/*
Emails:
sendSampleEmail
(I normally list all emails here for quick reference)
*/
class MailerComponent extends Object {
var $phpMailer;
var $testMode = false;
var $from = 'automailer@example.com';
var $fromName = 'Example.com Automailer';
var $sig = "Regards,

Automailer";
var $parent;

// Create test email for use in testing mode
function createTestEmail($email) {
$email = preg_replace('/(.*?)@(.*)/', "testmail+\$1@gmail.com", $email);
return $email;
}

// Startup functions
function startup(&$controller) {
App::import('vendor', 'Mailer', array('file' => 'mailer/class.phpmailer.php'));
$this->parent =& $controller;
}

// Send an email
function send($to = null, $subject = null, $message = null, $attachments = array(), $riders = array(), $replyTo = null, $replyToName = '', $from = null, $fromName = null) {
// Set up mail
$this->phpMailer = new PHPMailer();
$this->phpMailer->IsSendmail();
$this->phpMailer->From = ($from) ? $from : $this->from;
$this->phpMailer->FromName = ($fromName) ? $fromName : $this->fromName;
$this->phpMailer->Subject = $subject;
$this->phpMailer->MsgHTML($message);

// Set reply to
if ($replyTo) $this->phpMailer->AddReplyTo($replyTo, $replyToName);

// Test mode switching; replace this check with your own domain or environment based switch
if (true) {
$this->testMode = true;
}

// Add address(s)
if (!is_array($to)) $to = array($to);
foreach ($to as $address) {
if ($this->testMode) $address = $this->createTestEmail($address);
$this->phpMailer->AddAddress($address);
}

// Add rider(s)
if (!$this->testMode) $this->phpMailer->AddBCC('backup@example.com');
if (!empty($riders)) {
foreach ($riders as $r) {
if ($this->testMode) $r = 'tester@example.com';
$this->phpMailer->AddBCC($r);
}
}

// Add attachments
foreach ($attachments as $a) {
$this->phpMailer->AddAttachment($a);
}

// Send email
$success = $this->phpMailer->Send();
$this->phpMailer->ClearAddresses();
$this->phpMailer->ClearAttachments();
return $success;
}

// Send sample email with data from a form
function sendSampleEmail($data) {
$subject = "Sample Email - Contact from {$data['name']} ({$data['ip']})";
$contents = array();
foreach ($data as $field => $value) $contents[] = ucwords($field) . ": {$value}";
$contents = join("\n", $contents);
$message = nl2br("Dear Admin,

The following contact request was received.

---- Begin contact contents ----

{$contents}

---- End contact contents ----

{$this->sig}");
$this->send('admin@example.com', $subject, $message);
}
}
?>


The component supports sending all emails to a testing account and centers around the send function. The send function's parameters are pretty self-explanatory, the only odd one being $riders, my unconventional name for BCCs. Each call to send creates a new PHPMailer object, which may seem wasteful but in my experience prevents a lot of strange issues, such as person B receiving email content intended for person A from a previous send.

The sample email function in the component sends a very simple contact form to the administrator of example.com. The $data variable is provided by the controller and is used to compose a unique email. The message is very plain HTML: simply a string with all new lines converted to HTML line breaks. For complex HTML emails I normally have the message retrieved by calling something like $this->parent->requestAction("/mycontroller/emailFor/12"); which means the email is rendered by just another controller action that can be styled and tested easily.

As $this->parent refers to the calling controller, one can use it to load models, perform finds, access submitted variables, and so on. The only thing to keep in mind is the level of dependency between the component and the calling controller, although I do not find this a problem as the component is more of a function-based container that makes managing email content easier. Under extremely simple circumstances the email functions could live in the relevant controller instead of a separate component.

To use the component just include it as a component in your controller with:


var $components = array('Mailer');


And allow sending with a function like:


function contact() {
if ($this->data) {
$this->Mailer->sendSampleEmail($this->data['Contact']);
$this->Session->setFlash('Your message has been received. We will get back to you shortly.');
$this->redirect('/contact');
}
}


While the component is set up to use the default mail sender, it is also possible to configure it for a hosted provider like Gmail. See my post on using Gmail with PHPMailer for the configuration settings, and place them in the "Set up mail" section of the send function.

Linux Tips 14: Changing the default cron editor to vim

For some reason a lot of the servers I work with have the default cron editor, the one launched by crontab -e, set to nano. As a long-time vim user I find nano too slow and strange for quick editing, so I needed to set the account up to use vim instead. Enabling vim for crontab is easy; simply execute the following line at the command prompt:


export VISUAL=vim

You can check that it worked using the command env. Its output should show VISUAL=vim if the export succeeded; alternatively, just execute crontab -e and see which editor loads. If you want the setting applied all the time, add the line to your .bashrc or .bash_profile, depending on your *nix distro.
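The whole sequence, including making the change stick across logins, can be sketched as below (the .bashrc path is an assumption; some distros source .bash_profile instead):

```shell
# Apply the editor choice to the current shell session
export VISUAL=vim

# Persist it for future shells by appending to the profile file
echo 'export VISUAL=vim' >> ~/.bashrc

# Confirm the variable is visible to child processes such as crontab
env | grep '^VISUAL='
```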

Please note that you can also use any other editor that can be launched from the command line. Just replace vim with your editor of choice.

Improved Cake PHP debug log messages CSS style

As much as I like the CakePHP DebugKit component, it sometimes causes PHP fatal errors to not be shown, leaving me scratching my head at a white screen until I remove the component inclusion code to see what I have done wrong.

In search of a lighter weight solution I went to Google and found a nice CSS style for the class cake-sql-log that automatically hid the SQL log table thereby removing much of the CakePHP debug mode output.

However, the problem with that style is that with a lot of queries the SQL log can fill the whole screen and make the queries very difficult to read. Therefore I have included my modified version below, which creates a zoom effect so that only the cell you are hovering over is rendered at a readable size. The CSS styling code required is:

.cake-sql-log { font-family:monospace; position:fixed; top:99%; z-index:100000; width:100%; background:#000; color:#FFF; border-collapse:collapse; }
.cake-sql-log caption { background:#900; color:#FFF; }
.cake-sql-log:hover { top:auto; bottom:0; }
.cake-sql-log td { font-size:3px; padding:1px; border:1px solid #999; background:#FFF; color:#000; }
.cake-sql-log td:hover { font-size:10px; background:#FFA; }

When the style is applied the default view of the SQL log will just show a minimal red bar at the bottom of the window:

And when you hover over the red bar, the full log is shown with only the hovered cell readable:

Working with multiple UTC/GMT timezones in MySQL

When working with international timezones it is not always possible to work off UTC/GMT time. At times the webhost may prevent you from changing configuration settings, or data may already exist that prevents you from updating the system timezone.

Luckily with MySQL's date functionality it is not too hard to work around this problem. In this MySQL tip I will show how it is possible to work with GMT timezones for any record with a little date manipulation. Please note that I assume the dates you are working with are recognized MySQL date formats of either DATE, DATETIME, or TIMESTAMP.

The first thing that you need to do is find out the time offset between your database and GMT. You can do this by using the query SELECT NOW() and checking the difference using a Google query. As an example the server I am working on shows the time as "2009-03-16 01:35:15" while Google tells me the current GMT time is "6:35am Monday". This means the GMT offset of the server is +5 hours (6 - 1).
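If you also have shell access to the database server, the server's offset can be read directly instead of being worked out by hand against Google; a sketch assuming GNU date:

```shell
# Print the server's UTC/GMT offset, e.g. -0500 means GMT is 5 hours ahead
date +%z
```

Note this reports the operating system's timezone; MySQL can be configured with its own time_zone setting, so comparing against SELECT NOW() is still a worthwhile sanity check.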

The second thing you need is for your users to tell you their GMT offset. For example, I am on the Australian east coast so my GMT offset is +10. For demonstration purposes I will assume this value is stored in the "settings" table in the field "gmt_offset". Using these two bits of information we can get the user's local time with the following query.




SELECT
DATE_ADD(NOW(), INTERVAL 5 HOUR) as gmt_time,
DATE_ADD(NOW(), INTERVAL (5 + gmt_offset) HOUR) as local_time
FROM
settings
WHERE
settings.user = 'Paul'

A more complex example where manual time offsetting is required is time-related user notifications, such as in a calendar application. In this example I will show how a notifications table, where each user-set notification time is stored exactly as the user entered it (in the user's local time), can be queried so that notifications due to be sent out are retrieved correctly.

SELECT
n.*,
DATE_ADD(NOW(), INTERVAL (5 + s.gmt_offset) HOUR) as local_time_start,
DATE_ADD(DATE_ADD(NOW(), INTERVAL (5 + s.gmt_offset) HOUR), INTERVAL 1 MINUTE) as local_time_end
FROM
notifications AS n
LEFT OUTER JOIN settings AS s ON (n.user = s.user)
HAVING
local_time_start <= n.notify_at
AND n.notify_at < local_time_end

The trick with the above query is that we treat notify_at as the user would expect, in the user's local time. Instead of adjusting it to server time, we calculate what the user's local time is at the current server time and see whether any notifications fall within this minute. While it is possible to shift the notification time to server time and retrieve the required records based on that, I find it more intuitive to leave user input as is.

Using the above method means the system does not actually need to worry about timezone issues unless it is actioning a user-set time, as we can expect all users to work within their own timezone. However, this method gets more complicated if your application's users are allowed to share date-related information, in which case three offsets need to be combined (server offset, sharer offset, sharee offset) to show an accurate local time.

Managing semi-static select options in Cake PHP using table-less models

In Cake PHP a model normally represents a database table. However, there are times when it is not necessary to create a table to store data; in most cases such data is a semi-static enumeration, and as an application grows the number of these semi-static lists grows with it.

For example, options such as "Yes"/"No" or "Active"/"Inactive" are simply stored as tinyints in MySQL yet need to be presented to the user as "Yes" or "No". For Cake PHP to automatically generate a drop-down list with "Yes" and "No", a foreign key would otherwise be needed, adding an unnecessary join just to make select generation simpler.

The solution is to create a model that does not use a table at all. For most of my programs I call this model Staticselect as its main purpose is to give me quick access to commonly used drop down options that I can set in my controllers for the FormHelper to use in views. The model code is:


<?php
class Staticselect extends AppModel {
var $name = 'Staticselect';
var $useTable = false;

// Person titles
function titles() {
$titles = array('Mr.', 'Mrs.', 'Ms.', 'Miss.');
return $this->toOptions($titles);
}

// Convert array to key value options
function toOptions($a) {
$os = array();
foreach ($a as $v) $os[$v] = $v;
return $os;
}

// No/yes options with int values
function noYesInt() {
return array('0' => 'No', '1' => 'Yes');
}
}
?>

The above code shows a few sample functions. The most useful is probably toOptions, which takes an array and turns it into a hash where keys and values are identical, so that the value the user selects is also what is submitted with the form. The function noYesInt is a simple example of the kind of data I commonly store in such a model.

For those less familiar with how Cake PHP works with options, I will also show how the model can be used in a controller and view. The controller action where the static options are needed should contain a line like:



$this->set('nyOptions', $this->Staticselect->noYesInt());


While the view would use the data in a form input using the code:



<?= $form->input('Settings.hideEmail', array('options' => $nyOptions)) ?>

Converting Australian (little endian) date to ISO using just one line in PHP or Javascript

Working with dates can often be problematic with Australian localization (dd/mm/yyyy, little endian), as most server software understands either ISO (yyyy-mm-dd, big endian) or US (mm/dd/yyyy, middle endian) formats. Having seen various solutions involving regular expressions or rebuilding date strings with concatenation, I realized there is a much simpler way to go from Australian dates to ISO dates that MySQL and PHP's strtotime will happily accept.

In PHP the code is:

// Original: '31-12-2008'
// $isoDate: '2008-12-31'
$isoDate = join('-', array_reverse(explode('-', '31-12-2008'))); // explode, as split() is deprecated


In Javascript the code is:

// Original: '31-12-2008'
// isoDate: '2008-12-31'
var isoDate = '31-12-2008'.split('-').reverse().join('-');


The above one liners can be expanded to turn the given Australian dates into usable time with just a little bit more code as well.

In PHP the code is:

$t = strtotime(join('-', array_reverse(explode('-', '31-12-2008'))));


In Javascript the code is:

var t = new Date('31-12-2008'.split('-').reverse().join('-'));


Looking at the above code, it is actually easier to turn a little endian date into an ISO date than to convert a middle endian one, as the array-reverse trick does not apply when only the first and second date elements need to swap. In such cases regular expressions do the trick much more nicely.

How to check if jQuery is loaded

This post is a result of analyzing my Google Analytics logs: it seems a lot of people want to know how to check whether jQuery is loaded. I will demonstrate two options here. Note that referencing the bare variable jQuery throws a ReferenceError when the library is missing, so check it as a window property:


if (window.jQuery) {
alert('jQuery is loaded!');
}


Alternatively you can check if jQuery is not loaded by using:


if (typeof jQuery == 'undefined') {
alert('jQuery has not been loaded!');
}

Javascript loading optimization with YUICompressor and output piping

The YUI Compressor is a Java tool that takes any Javascript file and outputs a minified version, removing all unnecessary characters and optimizing the code so it takes up as little space as possible; a side benefit is code obfuscation, which makes it a little harder for visitors to read your source.

While using the minifier will improve the performance of your site, it is even better to reduce the number of asset requests by combining as many Javascript files as possible. This can be achieved with a script similar to the following:

#!/bin/sh
echo "Creating default layout min js ..."
java -jar ~/tmp/bin/yuic.jar jquery.min.js > layouts/min.default.js
java -jar ~/tmp/bin/yuic.jar layouts/default.js >> layouts/min.default.js

With the above script, the two files used in my application's default template are combined into just one file. Note that the first compression line overwrites the contents of min.default.js using >, while the second (and any further) lines append to it using >>. The script also assumes you have installed the YUI Compressor in your tmp directory; your actual path and jar file name will differ.

The example above is a very simple case that doesn't really require such optimization. However, on a complicated website using 10 different jQuery plugins, even if every file is cached and each conditional request costs only around 200ms for the timestamp check, combining the scripts can save almost two seconds of page loading time.

Linux Tips 13: Ubuntu Squid server setup how to

Squid is a proxy server for Linux that caches requests to boost speed and can also hide your real IP. This guide covers setting up a Squid proxy server on Ubuntu. The first step is to install Squid using apt-get with the command: sudo apt-get install squid

With Squid installed you can modify the configuration file at /etc/squid/squid.conf. Before the server can be used, the configuration must be updated to allow your host. I allowed access from my IP by adding the line acl our_networks src 1.2.3.4 (replace 1.2.3.4 with your IP address) just above the line http_access allow our_networks, which I also uncommented.

To anonymize your IP address you also want to find the line forwarded_for on and change it to forwarded_for off, which makes all requests through the proxy look as though they came from the proxy itself.

To test and use your proxy just check the proxy setting instructions for your web browser then set the server address to the IP address of your Ubuntu server and the port to 3128.
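The configuration edits above can also be scripted. The sketch below runs against a sample copy of the file rather than the live one; the sample contents, the 1.2.3.4 placeholder, and GNU sed are all assumptions, and you would point CONF at the real /etc/squid/squid.conf only once happy with the result:

```shell
# Work on a sample copy first, never the live file
CONF=squid.conf.sample
cat > "$CONF" <<'EOF'
#acl our_networks src 192.168.1.0/24 192.168.2.0/24
#http_access allow our_networks
forwarded_for on
EOF

# Allow our IP, enable the access rule, and hide the client address
sed -i \
  -e 's|^#acl our_networks src .*|acl our_networks src 1.2.3.4|' \
  -e 's|^#http_access allow our_networks|http_access allow our_networks|' \
  -e 's|^forwarded_for on|forwarded_for off|' \
  "$CONF"

cat "$CONF"
```

Remember to restart Squid after changing the real configuration so the new settings take effect.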

Object oriented event handling in Javascript using the jQuery plugin model

When coding my online password manager, Passbook, I found it hard to wrangle the Javascript code to fit the many elements that could be generated on the screen. It is not a new problem to me, as I have worked on refactored combination pages that attempt to bring together what used to be 10 or so pages of functionality into an AJAX enabled super page. The result is normally a thousand-plus lines of unmaintainable code.

For Passbook I decided to solve this problem once and for all. The solution, I believe, is to objectify page elements as blocks, so that a panel with an edit and a delete button can be duplicated quickly without the Javascript code having to track which panel on the page was clicked and modify that element. With an object oriented approach, the page object can edit or delete itself because it knows what it is and what it represents.

There are some existing solutions that use custom methods to streamline the object oriented process and work around Javascript's event-target scoping of "this", but I believed a better method existed that did not require so much prototype modification and was more self-contained and flexible. My solution is to use jQuery's plugin model to control on-page elements, or widgets.

To see the basic pattern it is easiest to first check out the functional demo. The demo contains two main elements: a widget container with an add action, and a widget that offers the user the ability to submit or remove it. The demo shows the widget manipulating itself and its parent, as well as using a basic AJAX callback within itself.

The sequence in which I would normally code this is to first create my HTML code. In this case it is very simple and consists of the following:


<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Object oriented event handling in Javascript using JQuery plugins</title>
</head>
<body>
<!-- HTML -->
<h1>My widgets</h1>
<div class="widgetContainer">
<div><a href="#" id="add">Add widget</a></div>

&nbsp;
</div>
</body>
</html>

Then the widget container plugin is created, with the add link hooked up to an event. Creating the widget container as a plugin means the code is not restricted to any page or element: as long as the element ids or classes within match, any element can be turned into a functional GUI element. The basic plugin code with the onload initializer is:


// Widget container
(function($) {
// Widget container plugin
$.fn.widgetContainer = function() {
this.each(function() {
// Vars
var wc = $(this);

// Set events
wc.find('#add').click(function(e) { if (e) e.preventDefault(); add(wc) });
});
}

// Add a widget to the container
function add(wc) {
console.log("add clicked");
}
})(jQuery);

// Main
$(function() {
$('.widgetContainer').widgetContainer();
});

The above code is quite compact, and by using the plugin model we can have multiple widget containers on the page without changing the code. The benefit of object oriented event handling becomes clearer when we create the widget itself, which is designed to coexist with other widgets in the container. The widget code is:


// Widget
(function() {
// Widget plugin
$.fn.widget = function(container) {
this.each(function() {
// Vars
var w = $(this);
w.parent = container;

// Set events
w.find('form').submit(function(e) { if (e) e.preventDefault(); submit(w) });
w.find('.remove').click(function(e) { if (e) e.preventDefault(); remove(w) });
});
}
$.fn.widget.template = '<div class="widget"><form action="" method="post"><input value="" type="text"><input value="Action!" type="submit"><a href="#" class="remove">Remove</a></form></div>';

// Remove widget
function remove(w) {
w.remove();
}

// Submit widget data
function submit(w) {
w.css('background', 'red');
$.post('/', w.find('form').serialize(), function(data) {
w.find(':text').val((new Date()).toString());
w.parent.fadeOut();
setTimeout(function() { w.parent.fadeIn() }, 500);
});
}
})();

The above code does not repeat the jQuery parameter because it resides alongside the widget container code and is loaded at the same time. However, if you decide to abstract it into its own file, that is easily done by adding the jQuery parameter to the final (); so it reads })(jQuery);

The widget code follows the same pattern as the widget container but, as one can see, does a lot more. The widget HTML is stored in $.fn.widget.template; it could also be placed on the page and retrieved with a jQuery selector on initialization. It all depends on how you want to balance dependencies against ease of editing.

The pattern works around Javascript's event limitations by passing the widget object into a new function attached to widget events. This is a simple way to refer back to the object of interest rather than just the event target. I will often also include the event target (this) as a parameter for functions like submit when extra data needs to be considered before taking action.

The final bit of code that ties the widget to the widget container is to update the widget container's add function. The add function needs to be updated to the following so that the widget is inserted into the widget container and initialized with all event handlers.


// Add a widget to the container
function add(wc) {
var widget = $($.fn.widget.template);
widget
.appendTo(wc)
.fadeIn('slow')
.widget(wc);
}

Putting it all together, the full Javascript required to make a widget container that can add multiple widgets becomes:


// Widget container
(function($) {
// Widget container plugin
$.fn.widgetContainer = function() {
this.each(function() {
// Vars
var wc = $(this);

// Set events
wc.find('#add').click(function(e) { if (e) e.preventDefault(); add(wc) });
});
}

// Add a widget to the container
function add(wc) {
var widget = $($.fn.widget.template);
widget
.appendTo(wc)
.fadeIn('slow')
.widget(wc);
}

// Widget
(function() {
// Widget plugin
$.fn.widget = function(container) {
this.each(function() {
// Vars
var w = $(this);
w.parent = container;

// Set events
w.find('form').submit(function(e) { if (e) e.preventDefault(); submit(w) });
w.find('.remove').click(function(e) { if (e) e.preventDefault(); remove(w) });
});
}
$.fn.widget.template = '<div class="widget"><form action="" method="POST"><input type="text" value=""/><input type="submit" value="Action!"/><a href="#" class="remove">Remove</a></form></div>';

// Remove widget
function remove(w) {
w.remove();
}

// Submit widget data
function submit(w) {
w.css('background', 'red');
$.post('/', w.find('form').serialize(), function(data) {
w.find(':text').val((new Date()).toString());
w.parent.fadeOut();
setTimeout(function() { w.parent.fadeIn() }, 500);
});
}
})();
})(jQuery);

// Main
$(function() {
$('.widgetContainer').widgetContainer();
});

Once again, please see the demo for the full source code and to see it in action. Hopefully this will make your development of Javascript-based GUIs much simpler, as it has done for me. I understand it is not a perfect solution, but it has served me well in my work and projects by limiting what I need to focus on to very manageable objects.

February 2009 traffic review

The progress of my blog surpassed my expectations in February. My original quarterly goal for March was a high of 50 visitors a day, yet on February 28 the Google Analytics traffic report showed a visitor count of 82. The traffic still comes largely from Google search, with RSS subscriber numbers embarrassingly low. The traffic chart is below.

With my modest traffic goals met I have set a new quarterly goal, for the end of June, of achieving 200 visitors a day. Based on current growth, with each week giving me a new high approximately 10 higher than the previous week, it should be achievable; although I am still waiting for the figures to plateau, as these things seem to do.

The traffic numbers also seem to correlate tightly with the number of posts I have. The current post count is 75, so once I hit 200 posts I should be able to achieve the second quarter goal I have set. While I understand this post is rather different from my regular posts on programming and tech, hopefully it will help others put their blog-building ventures in perspective in terms of realistic growth rates.

Simple PHP email testing with regular expressions

|
Working with system-generated emails in PHP can be difficult because email addresses are often treated as unique data in the system. Creating and checking various email accounts becomes tedious, and when testing on a live system the results of sending a test email to a real client can be rather embarrassing.

The solution is a simple PHP function that you can call with any email address to convert it into a unique email address for an existing mailbox. The function will turn an email address like realclient@client.com into myaccount+realclient@gmail.com. Please note that the testing destination address needs to support plus addressing (e.g. Gmail).


// Create test email for use in testing mode
function createTestEmail($email) {
    $email = preg_replace('/(.*?)@(.*)/', "myaccount+\$1@gmail.com", $email);
    return $email;
}


To use the above function simply replace myaccount with your email account's name and gmail.com with your email's domain, then pass through it any email address your system sends to while you are testing. I would recommend automatically replacing any destination address with the result of the function if the hostname from $_SERVER['HTTP_HOST'] matches your development domain; this way the process is automatic and you don't need to worry about changing configuration or debug flags.

Linux Tips 12: Ubuntu DNS server setup and configuration

|
My host, Linode, offers its own DNS service, and I know that many web hosts, VPS, and dedicated server companies do the same. However, there are real benefits to running your own DNS server, with editing speed and ease of use among them. In full disclosure, I have since decided to use Linode's DNS service to reduce load on my own server. Nonetheless, this guide will go through the relatively simple process of setting up a DNS server on Ubuntu Linux.

The first thing one needs to do is to install Bind. Bind is a file-based DNS server that is pretty simple to use once you understand it; however, there are multiple files to edit. When installed using sudo apt-get install bind9, a default configuration file is created for you as well.

The second step is to update the /etc/bind/named.conf.local configuration file to add our zone. Our zone specifies which domains this DNS server is responsible for. For this tutorial I will use example.com as the sample domain. Therefore in named.conf.local you will add both the zone definition as well as the reverse DNS entry for your IP. They should be written as:


zone "example.com" in {
    type master;
    file "/etc/bind/zones/example.com.db";
    allow-transfer { any; };
};

zone "1.0.168.192.in-addr.arpa" {
    type master;
    file "/etc/bind/zones/1.0.168.192.db";
};


Please remember to replace example.com with your real domain name and 192.168.0.1 (written in reverse) with your real IP address.

The third, and optional, step is to configure some default DNS server options. The file used to do this is /etc/bind/named.conf.options. The main settings of interest are: forwarders, notify, and directory. Forwarders specify which DNS server should be used when your DNS server is queried for a domain that it is not responsible for. Notify specifies whether slave DNS servers should be notified of changes when they are made on this server. Directory specifies where DNS configuration files should be looked for if a full path is not used in the zone entries from step two. Samples of the three options (placed inside the options block) are:


forwarders { 208.67.222.222; 208.67.222.220; };
notify yes;
directory "/dns/zones";


The fourth step in our Ubuntu DNS server setup is creating our zone file. I am assuming that you did not specify a custom zone directory like the options example above. Therefore you will want to create your zone files in the folder /etc/bind/zones by just creating example.com.db and filling it with entries such as:


; TTL = Time to live for records on slave (2 days)
; 2009030700 = Serial for Bind to check whether an update has occurred
; 6H = Time between refresh requests
; 1H = Time between retry attempts
; 1W = Expiry time for the record on slave
; 1D = Amount of time an invalid response is stored on slave
$TTL 2D
@ IN SOA ns1.example.com. root.example.com. (
    2009030700
    6H
    1H
    1W
    1D
)

; ns1.example.com. = Name server
; mail.example.com. = Mail server
; www.example.com. = HTTP server
; *.example.com. = Wildcard entry
example.com. IN NS ns1.example.com.
example.com. IN MX 10 mail.example.com.
ns1 IN A 192.168.0.1
www IN A 192.168.0.1
mail IN A 192.168.0.1
* IN A 192.168.0.1


The above zone definition file sets up some basic servers and points them to the computer with the IP address 192.168.0.1. You can host each service on a different IP if they are on different servers. You can also alias other host names by using CNAME records instead of A records. Please note that all fully qualified domain names end with a trailing ".".

While a reverse DNS zone file is optional, mail sent from a server without a reverse entry can be flagged as possible spam, so it is good practice to create one. For our example zone the reverse entries would be in the file 1.0.168.192.db and look like:


; TTL = Time to live for records on slave (2 days)
; 2009030700 = Serial for Bind to check whether an update has occurred
; 6H = Time between refresh requests
; 1H = Time between retry attempts
; 1W = Expiry time for the record on slave
; 1D = Amount of time an invalid response is stored on slave
$TTL 2D
@ IN SOA ns1.example.com. root.example.com. (
    2009030700
    6H
    1H
    1W
    1D
)

@ IN NS ns1.example.com.
@ IN PTR example.com.


After the files have been created, restart Bind with the command /etc/init.d/bind9 restart and use the command dig @192.168.0.1 www.example.com to query your own DNS server for the record www.example.com. If an answer is given (it should look like your entry for www in the example.com.db file) then everything is set up correctly. You can now update your domain name registrar's DNS records to point to your server.

Backup MySQL database to Gmail using PHPMailer

|
Sometimes client database data is so small that it is unnecessary to sign up to a dedicated 3rd party backup provider. In such cases Gmail becomes a rather useful tool for sending and storing backups as Gmail maintains a copy of all sent emails. This functionality can be achieved quite easily with PHP and PHPMailer. The script required to perform the MySQL database backup is below.


<?php
// Import mailer
ini_set("memory_limit", "128M");
require "/cms/phpmailer/class.phpmailer.php";

// Perform backup (assumes a password-less MySQL root account;
// adjust the mysqldump credentials if yours differ)
$filename = "/tmp/" . date('Ymd-Hi') . ".sql";
exec("mysqldump -u root --all-databases > {$filename}");
exec("gzip {$filename}");
$filename .= ".gz";
$filesize = number_format(filesize($filename) / 1048576, 0);

// Send backup
$date = date('d/m/Y');
$m = new PHPMailer();
$m->IsSMTP();
$m->SMTPAuth = true;
$m->Username = 'username@gmail.com';
$m->Password = 'password';
$m->FromName = 'Database Backup Mailer';
$m->Host = 'ssl://smtp.gmail.com:465';
$m->AddAddress('secondarybackup@gmail.com');
$m->AddAttachment($filename);
$m->Subject = "Database Backup";
$m->MsgHTML(nl2br("The Database Backup for {$date} is attached, the filesize is: {$filesize}MB."));
if ($m->Send()) {
    echo "Success";
    unlink($filename);
} else {
    echo "Error, could not send: {$m->ErrorInfo}";
}
?>


The script assumes that PHPMailer is installed in a folder called phpmailer and that MySQL has a password-less root account; change the require path and the mysqldump line if your setup differs. Furthermore, the script should be placed in a folder where it can be accessed via HTTP so that you do not need to install the PHP CLI to make it work. The script can then be called using wget from a cron job to perform periodic backups.
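As a sketch of the final scheduling step (the URL and schedule here are illustrative, not from the original post), a crontab entry triggering the script via wget might look like:

```shell
# Hypothetical crontab entry: fetch the backup script nightly at 3am,
# discarding the page output. Adjust the URL to wherever you placed the script.
0 3 * * * wget -q -O /dev/null http://www.example.com/backup.php
```

Edit your crontab with crontab -e to add an entry like this.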

Linode Ubuntu Mail Server: Part 3 - Stop Gmail putting your emails into the spam folder

|
In part 2 of my Ubuntu mail server setup guide on Linode I wrote about my experience with Gmail soft-failing all emails sent from my server, and my workaround of using Google Apps to send my mail. Such a solution works well if you are only dealing with end-user accounts; however, I have recently had to use the system SMTP server to send out emails for my online password keeper application, Passbook.

The solution to getting your emails out of Gmail's spam filter and into users' inboxes is to set up an SPF TXT record. You can do this by using the Open SPF wizard or just use the settings below in your Linode DNS Manager.


To create those settings simply add a new TXT record, leave the name empty, and use "v=spf1 a mx ~all" as the value. This SPF record says that, for your domain, all servers pointed to by an A or MX record can send email on its behalf. The ~all is a soft fail: mail from any other server claiming to be from your domain should be treated with suspicion but not rejected outright. To disallow all other hosts from sending email with an @yourdomain.com address use -all, and to take no stance either way use ?all.

After you make these changes you can query whether they have worked using dig @ns1.linode.com TXT yourdomain.com from a *nix command line. The answer section should show the TXT record. You will need to wait for this data to propagate (I waited about 3 hours) before testing by sending a brand new email to your Gmail account. The email needs to have a new subject and message so that Gmail does not flag it as being the same as a previously marked spam message.

Linux Tips 11: Ubuntu user management

|
One thing that I find myself doing a lot on my Linode server is adding and managing users. Sometimes I need to create accounts for clients, friends, or family. So some accounts are temporary and some are persistent. Nonetheless, they all require the use of the following few simple Ubuntu user management commands.

Ubuntu has four basic user management commands: adduser, deluser, passwd, and usermod.

Adduser is a script that presents common functions of useradd in a user friendly manner, which basically allows you to add a user to the system. To use it type (as root): adduser [newuser] and you will be prompted with basic user information such as name, password, etc. The user home directory will be created for you as well.

Deluser is the counterpart script to adduser. If called by itself it will simply delete a user's account, thus revoking their system access. If you want to completely remove a user you need to call: deluser --remove-home --remove-all-files [newuser], which will remove the user's home directory as well as all files that belong to that user.

Passwd gives you much greater control over a user's password in terms of policy enforcement. You can lock an account with passwd -l [newuser] and unlock it using passwd -u [newuser]. You can also force the user to change their password immediately using passwd -e [newuser], or periodically using passwd -w 7 -x 30 [newuser] (the user must change his/her password every 30 days, with a warning shown 7 days before).

Finally, you can modify existing users' settings by using usermod. You can add them to a new group with usermod -a -G [newgroup] [newuser], change their home directory with usermod -d /home/newhome [newuser], or change their login name with usermod -l [newnewuser] [newuser].

I have only gone through my most common use cases for these Ubuntu user management commands; you can find more information as well as a list of all options in their man (manual) pages.

Javascript GMT/UTC timezone offset detection

|
Continuing from my previous date related PHP tip here is a very quick tip about how to detect the GMT/UTC timezone offset in Javascript. It really only takes two lines of code.


var today = new Date();
var offset = -(today.getTimezoneOffset()/60);


The above code will place the offset in hours in the offset variable in whatever scope you made those two calls. The negation is needed because getTimezoneOffset() returns the number of minutes that local time is behind UTC, so a UTC+10 timezone returns -600.

PHP dates between two dates tutorial

|
For any calendar or time management application it is useful to be able to generate dates between two given dates. Doing it is actually quite easy. Just see the code below for the function.


function getDaysInBetween($start, $end) {
    // Vars
    $day = 86400; // Day in seconds (note: DST transitions can shift results by an hour)
    $format = 'Y-m-d'; // Output format (see PHP date function)
    $sTime = strtotime($start); // Start as time
    $eTime = strtotime($end); // End as time
    $numDays = round(($eTime - $sTime) / $day) + 1;
    $days = array();

    // Get days
    for ($d = 0; $d < $numDays; $d++) {
        $days[] = date($format, ($sTime + ($d * $day)));
    }

    // Return days
    return $days;
}

The above code accepts start and end dates in any format understood by strtotime. The function returns an array of days; the format of each entry is specified in the $format variable, which accepts all formats used by date.

To use the function simply call it like getDaysInBetween('2009-03-01', '2009-03-06'); Please note that the results are inclusive of the original start and end dates.

Linux Tips 10: Rename multiple files

|
Renaming multiple files in Linux is surprisingly difficult given the power and simplicity of many other system commands. Unlike DOS, which provided a rename command that allowed wild cards, Linux's rename/mv command is less versatile. Therefore to rename files one needs to write a loop. Luckily bash helps a lot here.

Let's say we have in our directory a number of .txt files that we need to rename to .nfo. To do this we would need to use the command:

for f in *.txt; do mv "${f}" "${f/.txt/.nfo}"; done;
It is quite a long command, but it basically executes a loop that takes each file name f matching *.txt and gives it a new name where a match for .txt is replaced with .nfo. Note that the pattern is a bash glob-style pattern, not a full regular expression. Please also note that the above code only does one replacement per file name. If multiple replacements are needed then two slashes are required after f, ie. ${f//.txt/.nfo}
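A safe way to try this is on throwaway files, previewing the renames by substituting echo for mv before running the real thing (a minimal sketch; the /tmp paths are illustrative):

```shell
# Create a scratch directory with some sample .txt files
mkdir -p /tmp/rename_demo && cd /tmp/rename_demo
rm -f ./*
touch a.txt b.txt

# Dry run: echo each mv command instead of executing it
for f in *.txt; do echo mv "${f}" "${f/.txt/.nfo}"; done

# Looks right, so perform the renames for real
for f in *.txt; do mv "${f}" "${f/.txt/.nfo}"; done
ls
```

Once the echoed commands look correct, drop the echo and run the loop for real.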

Generate your own free SSL certificate in Ubuntu

|
For my Passbook project I wanted it to work over a secure connection, however I did not want to pay the $50 to $500 per year that I would've needed to pay for a single site or wild card certificate. Therefore I decided to create my own.

Please note before following this guide: self-signed certificates will cause warning messages to be shown to your website visitors because the browser cannot verify the certificate issuer (you). Also, while I did this on my Ubuntu server, most of the commands will work as long as you have OpenSSL installed.

For this example I will generate a wildcard certificate for my site: 24 Hour Apps; therefore all certificate related file names will be 24ha.

The first step is to create a home for your certificate files; I did this in my root home directory. Then generate your RSA private key. The commands to achieve this are:

mkdir ssl
cd ssl
openssl genrsa -des3 -out 24ha.key 1024

You will be asked to set a password for your private key. Please remember it as you will need it later on.

The next step is to generate your own certificate signing request. You can do this with:

openssl req -new -key 24ha.key -out 24ha.csr
You will be prompted for the private key password you set earlier, followed by a series of identifying questions; answer them as accurately as you can to create your CSR.

The following step is optional and removes the password from your private key so that when you launch Apache with mod_ssl you are not prompted for a password. For servers with monitoring software that automatically restarts processes this is quite handy. The commands for removing the password are:

cp 24ha.key 24ha.key.original
openssl rsa -in 24ha.key.original -out 24ha.key

Please note that your original key still exists and is now called 24ha.key.original.

We can now generate our SSL certificate with the command:

openssl x509 -req -days 365 -in 24ha.csr -signkey 24ha.key -out 24ha.crt
The most important answer you gave during the CSR step was to the question "Common Name (e.g., YOUR name)": it needs to be your website address, ie. www.example.com, or for wildcard entries *.example.com
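As an aside, recent OpenSSL versions can collapse the key, CSR, and self-signing steps into a single command. The file names and the -subj value below are illustrative, and -nodes skips the key password:

```shell
# One-shot self-signed certificate: a new 2048-bit RSA key plus certificate,
# valid for 365 days, with -subj supplying the answers non-interactively
openssl req -x509 -nodes -newkey rsa:2048 \
    -keyout /tmp/24ha_demo.key -out /tmp/24ha_demo.crt \
    -days 365 -subj "/CN=*.example.com"

# Inspect the certificate subject to confirm the Common Name
openssl x509 -in /tmp/24ha_demo.crt -noout -subject
```

This is handy for quick testing, though the step-by-step approach above gives you a reusable CSR.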

Now that we have our certificate we need to make it available to Apache. This part of the guide becomes more Ubuntu specific as other Linux distributions will have their Apache files located elsewhere. However, to give Apache access, the first step is to copy the SSL files over and enable mod_ssl. To do so type:

cd /etc/apache2/
mkdir ssl
cp ~/ssl/24ha.key ssl/
cp ~/ssl/24ha.crt ssl/
a2enmod ssl


Now we need to enter a virtual host entry for our SSL enabled domain. The following is an entry I have in the file /etc/apache2/sites-available/passbook.24hourapps.com

<VirtualHost passbook.24hourapps.com:443>
    SSLEngine On
    SSLCertificateFile /etc/apache2/ssl/24ha.crt
    SSLCertificateKeyFile /etc/apache2/ssl/24ha.key
    ServerName passbook.24hourapps.com
    DocumentRoot /home/passbook/www/

    <Directory /home/passbook/www/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Order allow,deny
        allow from all
    </Directory>
</VirtualHost>

Once you have created your virtual host entry restart Apache (using /etc/init.d/apache2 restart) and test your new secure site. For my example this is https://passbook.24hourapps.com

If all went well you should see Firefox's (or whatever browser you are using) message saying the secure connection has failed due to an unknown issuer. You will need to add an exception for the certificate before viewing the secure page. Adding the exception is a 3 or 4 click process that is not very intuitive. However, once it is done you can have cheap secure connections between your server and your web browser.

Linux Tips 9: Find recently modified files

|
Back when I was developing at a company where no version control systems were used and CVS was the pain it still is now, going without a system at all was preferable to trying to get a working CVS system. However, as expected, I would often find myself unable to locate code that had been overwritten by a colleague, and it was difficult to work out which files he had changed.

Luckily find, the Linux command, is quite powerful and can show you a list of all recently modified files. For example, if you come in to work on Monday and found out that a weekend coder was brought in and made some changes without leaving any documentation all you would need to do is type:

find . -mtime -3
The above command will find files that have been modified less than 3 days ago. To check the inode change time instead (which is also updated by permission and ownership changes), use:

find . -ctime -3
To be even more specific you can set a date range by using:

find . -ctime +1 -a -ctime -3
This will find all files changed more than one day ago but less than three days ago.
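The behaviour is easy to verify with back-dated files, assuming GNU touch with the -d flag is available (paths illustrative):

```shell
# Scratch directory with one old file and one fresh file
mkdir -p /tmp/mtime_demo && cd /tmp/mtime_demo
rm -f ./*
touch -d "5 days ago" old.txt
touch fresh.txt

# Only the file modified less than 3 days ago is listed
find . -mtime -3 -type f
```

Here only fresh.txt matches, since old.txt falls outside the 3-day window.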

For those wanting to check whether their employees are doing what they said they are, doing this check is also an easy way to see just how many files were updated to implement the "super comprehensive overhaul" that just had to be done.

CakePHP model component for quick model actions

|
CakePHP's scaffolding controller actions provide a very nice template for basic model actions such as add, edit, delete. However my problem with them is that they are too verbose for something used so often. Therefore I set out to abstract them into their own component that can achieve the same code that an entire scaffold action does with just a few lines.

The result of my efforts is the model component, available in full at paste2.org. Please go there to see and download the source code. I will just outline its features here. Please note that the component uses a global function called dot, which is simply a dot notation array access function I wrote. Please see my PHP array dot notation access article for the source code to that.

The component contains two main methods for use. They are add and edit. Both methods will automatically check the parent controller's data variable, which is automatically filled if you used CakePHP's form helper for all form inputs. Add and edit are basically the same method with add calling the create method on a given model prior to saving and edit checking that an id is provided before saving.

Add/edit accepts the following parameters: $modelName is the name of the model that is to be saved, ie. User. $params is an array with the following optional items:
  • success: Sets the flash message to display on success
  • successUrl: Redirects to this URL after the model data have been successfully saved
  • fail: Sets the flash message to display on failure
  • failUrl: Redirects to this URL if the results of the save is false
A sample call from a controller looks like the following:

$this->Model->edit('User', array('success' => 'Your account details have been saved', 'successUrl' => '/messages/thanks', 'fail' => 'Sorry, your account details could not be saved please correct the following errors and try again'));
That code will redirect the user to /messages/thanks and show the success message on a successful save, or leave the user at the current page to correct any errors that were detected by the User model.

The component also auto-loads any model that it does not find in the controller so one does not need to worry about specifying the model in the $uses array if the model will not be used often. However I find that if you are performing creation and update actions then you will need to load the model for the viewing page anyway.

It is possible to extend the component to support delete as well although I do find that just calling the default del method is simple enough, which is why I did not include it in the component.

Linux Tips 8: Time stamp file names for backups or archives

|
When doing simple (non-incremental) backups it is useful to be able to automatically time stamp or date stamp file names. This makes archives much easier to navigate when it comes time to retrieve a backup.

The command for doing this is quite simple. To just create a file with a time stamped file name, use the following command:

touch myfile-$(date '+%Y-%m-%d-%T')
The command will produce a file called myfile-2009-02-25-17:32:01. You can also use it to make backups when combined with tar. The following code will create a date stamped gzipped tar file of a directory called test.

tar czvf test_backup_$(date '+%Y%m%d').tgz test/
Using the above commands with cron can allow you to quickly create a very simple, yet effective, backup system for any important folders that you need to archive.
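A sketch of the cron step (the schedule and paths are illustrative): note that % is a special character in crontab lines, treated as a newline, so it must be escaped as \% inside the date format:

```shell
# Hypothetical crontab entry: nightly date-stamped backup at 2am.
# The backslashes before % are required because cron treats a bare % as a newline.
0 2 * * * tar czf /backups/test_backup_$(date '+\%Y\%m\%d').tgz /home/user/test/
```

Forgetting to escape the % signs is a very common cause of cron backup jobs silently producing empty or oddly named archives.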

Switching CakePHP debug mode based on URL

|
I like to maintain at least two versions of any code base: one for development and one for production. For larger projects, where an approval process tends to occur slowly, one would also need a client testing/staging version as well. Each of those versions ideally should operate differently and use at least different databases.

To manage all these versions and configurations one could branch and update the database configuration files manually each time. However, I believe the best way is to have the code detect what URL it is on and change settings based on the URL detected. Therefore if the cake application is being accessed from http://dev.example.com it will retrieve all data from the development database.

Setting this up is quite easy and I do this before I begin every project. The first step is to alter app/config/core.php, which is where the debug mode is normally specified. In it you will want to include a function before everything else to detect whether or not the application is being accessed from the development URL. The code required is below:

function dev() {
    if (isset($_SERVER['HTTP_HOST']) and preg_match('/^dev/', $_SERVER['HTTP_HOST'])) {
        return true;
    } else {
        return false;
    }
}

Then you will make your core config file's debug mode line be conditional based on the results of your dev() function.

if (dev()) {
    Configure::write('debug', 2);
} else {
    Configure::write('debug', 0);
}

By placing the dev() function in the core configuration file it means that it will be available to all files in your application. The config file is imported even earlier than the bootstrap file.

To make your database configuration switch based on the URL, add the following constructor function inside the DATABASE_CONFIG class in app/config/database.php

function DATABASE_CONFIG() {
    if (dev()) {
        $this->default['database'] = 'mydevdb';
    }
}

The above code assumes you have created a mirror development database the same as your live version with the same login and password.

The dev() function can also be used in emails, which allows for simplified email debugging (which I will write about in another tip). For a quick method, simply replace any destination email address with your own when dev() returns true; this lets you test all email functions with live data without having puzzled clients.

Project alpha: Passbook - 90% complete, finally

|
Wow, it seems like not so long ago that I dreamed of having this project done within a week. Now, almost two months later, it is still ongoing. The project is now functionally complete and that is why I am posting about it. I personally would use it now, although I will not until I move it away from the development only URL and on to a live URL. Once that is done I look forward to moving away from Google Notebook as my main password keeper.

Spending a few hours a day on the project has not been easy. With an eight hour work day, family, friends, and spouses to manage, it is not simply a matter of just sitting down and working. Juggling everything has been a great learning experience for me, and although I did not achieve what I originally wanted, I do believe that setting myself up for failure, in a limited sense, has helped me achieve far more than I otherwise would've.

The site really is only 90% functionally complete. I still need to write a lot of copy and do a lot of screen shots so that other people can understand my vision for the program, which is to be the easiest to use password keeper online. That is why I have added a lot of AJAX candy to make actions such as saving, editing, etc. as fast as possible. My goal is to be able to enter a new account within about 30 seconds of me logging in. With my limited testing I believe I have achieved that.

A key decision for me with regards to the completion of this project was whether to allow myself to feature creep. While I have been "editing", to borrow a commonly used phrase on Project Runway, quite a lot, I have allowed two features through that I felt were important enough to launch with: SSL support and export functionality. My decision to implement those features is based purely on the fact that I would not sign up to a password keeping service unless I knew it supported SSL (although I do not always use Google Notebook with SSL enabled; but they are Google and they have already established a high level of trust with me) and allowed me to leave without thinking twice about it.

Therefore I have implemented SSL, with my own signed certificate as there is no budget for a paid one, and will implement CSV export this week. Although I am unsure how successful the project will be I am really quite glad to just be able to scratch my own itch. That is what I believe development should be about. Projects of passion whose success or failure is inconsequential because it fulfills a need you feel is not adequately being met by whatever is on the market now.

iPhone calendar recreated in jQuery and HTML

|
Stefano Verna has created a very nice iPhone-calendar-like display using just jQuery and HTML. The calendar is constructed from a basic style-less table serving as its base, with hover popups showing event information for dates of interest. It is both visually elegant and very easy to implement, using only 64 lines of Javascript. In theory it should be very easy to convert this to a plugin so that it can be reused in an AJAX context with dynamic paging and re-initialization. The screenshot is shown below.

The only problem I have with it is that it is not a printable calendar by default. However, the Apple calendar style should be easily ported into a printable CSS stylesheet.

Linux Tips 7: Pushd and popd command how to for easy directory traversal

|
Moving around directories can sometimes be a real pain in Linux/Unix/OS X. More descriptive directory names, and source code trees that group sub-folders in an MVC pattern, mean that you will often need to jump between three or more directories to edit one page in your application. Therefore it is very important to be able to jump quickly between directories. One way is to always start at the base and re-launch your cd commands from history. The alternative is to use the Linux pushd and popd commands to enable quick traversal between directories of interest.

The basic commands in this how-to are pushd and popd. They work on the concept of a stack: you can cd into directories using pushd, which adds the new directory onto the stack, and return to previously visited directories using popd. An example is shown below:

[1014 paul:~] pushd Downloads/
~/Downloads ~
[1015 paul:~/Downloads] pushd ~/Documents/Contracting/
~/Documents/Contracting ~/Downloads ~
[1016 paul:~/Documents/Contracting] dirs -v
0 ~/Documents/Contracting
1 ~/Downloads
2 ~
[1017 paul:~/Documents/Contracting] popd
~/Downloads ~
[1018 paul:~/Downloads] popd
~
[1019 paul:~]

The above code shows me going from my home directory (~) and navigating to my downloads folder, then my contracting folder. You can see with each pushd command it is giving me a view of the stack, with the most recently executed pushd target on top (top being left most). You can also explicitly view the stack by executing the dirs -v command. Finally I use two popd
commands to pop items off the stack which brings me back to my home directory.
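One more trick worth knowing (assuming bash, whose builtins these are): pushd with no arguments swaps the top two stack entries, which makes bouncing between two directories very fast. A small sketch with throwaway directories:

```shell
# Two scratch directories to bounce between
mkdir -p /tmp/pd_demo/a /tmp/pd_demo/b
cd /tmp/pd_demo/a
pushd /tmp/pd_demo/b > /dev/null   # stack is now: b a
pushd > /dev/null                  # no arguments: swap the top two entries
pwd
```

After the swap, pwd reports /tmp/pd_demo/a again, and another bare pushd would take you back to b.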

PHPMailer UTF8 unicode charset support in email messages

|
I recently came across a problem where email being generated from our system contained user input in the UTF8 unicode charset. PHP's UTF8 support is normally pretty good; however, it seems that PHPMailer does not do automatic charset detection on outgoing emails. The solution is quite simple: just set the charset to UTF8 manually in PHPMailer and UTF8 encoding will be used. The code required is:

$myPhpMailerObj->CharSet = 'UTF-8';

Project alpha: Passbook - Tag clouds Cake PHP tutorial

|
A key feature I wanted for Passbook that I found lacking in other password applications is a tag cloud of my password groups/lists. An HTML tag cloud is actually a very simple element. All it contains is a list of links with classes that adjust the font size based on the weight of each link. In the case of Passbook the weight is determined by the number of passwords in a group. I will start by showing the end result:
From that it is easy to see that the group efwfe (yeah, I am pretty lazy with test data) has the most passwords and is what I am most likely to refer to often. Less popular groups fade into the background with their smaller font so that focus can quickly be placed on the high weight items.

The first step in creating a tag cloud is setting up how you want your fonts to look. I have the following CSS styles set for the font sizes (.c1 to .c5) as well as the spacing between tag items (.cloud .a).

.c1 { font-size:100%; }
.c2 { font-size:120%; }
.c3 { font-size:140%; }
.c4 { font-size:160%; }
.c5 { font-size:180%; }
.cloud a { margin-left:0.8em; }
.cloud a:first-child { margin-left:0; }

Calculating the weights is a bit more involved. I wrote the following function in my Pgroup model, since password groups are the main elements of my tag cloud. Each Pgroup record contains a cache field called pitem_count that stores the number of password items belonging to that group. The function setCloudWeights($groups) takes the output of a $Pgroup->find('all') and sets the weights.

// Set cloud weights, expects $groups to be results of find('all')
function setCloudWeights($groups) {
	// Weight bounds
	$cloud = array('min_weight' => 1, 'max_weight' => 5);

	// Highest item count across all groups, but at least max_weight
	$ceil = max(array_keys(Set::combine($groups, '{n}.Pgroup.pitem_count')));
	if ($ceil < $cloud['max_weight']) $ceil = $cloud['max_weight'];

	// Number of items per weight step
	$floor = round($ceil / $cloud['max_weight']);

	// Scale each group's item count to a weight between min and max
	for ($gi = 0; $gi < count($groups); $gi++) {
		$g =& $groups[$gi]['Pgroup'];
		$g['cloud_weight'] = max($cloud['min_weight'], round($g['pitem_count'] / $floor));
		if ($g['cloud_weight'] > $cloud['max_weight']) $g['cloud_weight'] = $cloud['max_weight'];
	}

	return $groups;
}

The function returns the result set with a cloud_weight attribute between 1 and 5 added to each group, which we can use to reference the correct CSS class. The function uses minimum and maximum weights to stop every tag item from taking on the maximum font size; the maximum is also used to scale counts down into weights that fit between 1 and 5 when a tag item with more than the preset maximum is found. You can change 1 and 5 to anything you want and the code will do the rest, although you will need to create matching CSS classes for whatever range you choose.
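To make the scaling concrete, here is the same count-to-weight calculation sketched in shell arithmetic with made-up counts (the numbers are hypothetical, not from Passbook):

```shell
# Map item counts onto weights 1..5: divide each count by the step size
# (max count / max weight) and clamp the result into the 1..5 range.
counts="3 12 7 1 25"
max=25
step=$(( max / 5 ))                 # 5 items per weight level
for c in $counts; do
  w=$(( (c + step / 2) / step ))    # integer rounding
  if [ "$w" -lt 1 ]; then w=1; fi
  if [ "$w" -gt 5 ]; then w=5; fi
  echo "count $c -> class c$w"
done
```

For these counts the weights come out as c1, c2, c1, c1, c5, which is exactly the spread you want: one dominant item, the rest receding.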

The function makes very little actual change to the result set created by Cake PHP, so you can still use it normally. To generate the cloud I use the following code in my view. It assumes that you have put the results of setCloudWeights(...) into a view variable called $cloud (using $this->set(...) in the controller). The view code is shown below.

<div class="cloud">
	<? if (empty($cloud)): ?>
		You have no groups right now.
	<? else: ?>
		<? foreach ($cloud as $group): ?>
			<? $g =& $group['Pgroup'] ?>
			<a href="<?= $html->url("/passwords/{$g['name']}") ?>" class="c<?= $g['cloud_weight'] ?>"><?= $g['name'] ?></a>
		<? endforeach; ?>
	<? endif; ?>
</div>

The code provides the basics of a tag cloud by creating our list of links and setting each link's class between c1 and c5. You will need to change the model names and attributes to suit your circumstances, however implementing the three sections should not be too difficult. If it is, then leave a comment and I'll try to help out as well as improve on this Cake PHP tutorial.

Class design using the behaviour tree diagram system to create jQuery plugin class models

|
In a previous post I wrote about my experience using the behaviour tree diagram system with Javascript. The results were quite good; however, I still found the way I was coding the behaviours from the chart into functions to be quite inefficient. Furthermore, the singleton class design I was using to implement the Javascript functionality in Passbook did not fit well into the object oriented paradigm that behaviour trees were intended for, which meant my class models became much larger than they needed to be.

One idea that sparked my interest in further research into creating good Javascript objects was seeing how a complex object such as a jQuery calendar date picker could have a class model so completely modularized that one could attach it to pretty much any input on the page and have it fill data for that input. My research led me to the basic pattern for creating a jQuery plugin. The jQuery plugin scoping actually makes coding objects in jQuery very easy, with minor syntactic overhead. Using my previous example, the design below that separated state and actions can be combined into one object and its methods.

State:View > Action:Edit > Behavior:View.Title.Hide, Editor.Attach(View.Title)
State:Edit > Action:Save > Behavior:Editor.Controls.Hide, Editor.Loading.Show, Editor.Form.Submit

Can become:

As you can see, the design now becomes a nice bridge between the mockup and the final class model. The abstraction of the actual code becomes more relevant and less confusing by removing state information. This chart can be created by simply reviewing mockups and all the user interaction elements on them. For those interested, the above chart was created using the free and very fast mind mapping tool MindNode (OS X).

I will write about how to create fully modular object oriented code in jQuery using the plugin pattern in another post. However, I can say now that it took me a few implementations before I was fully rid of the singleton class design habit developed from reading too many simple Javascript event handling examples. For now I will show you the skeleton plugin code for the above object, which is really very elegant. Please note that the object name is now objectt to avoid conflicts with language reserved words.

(function($) {
	$.fn.objectt = function() {
		// Return this.each(...) so the plugin remains chainable
		return this.each(function() {
			// Initialize object
			var obj = $(this);
			obj.title = obj.find('h1');
			obj.loading = '<div><img src="loading.gif"/></div>';

			// Events
			obj.find('.edit').click(function() { edit(obj); });
			obj.find('.save').click(function() { save(obj); });
		});
	};

	function edit(obj) {
		// Do stuff with obj here
	}

	function save(obj) {
		// Do stuff with obj here
	}
})(jQuery);

The above plugin can be applied by calling $('.myobject').objectt(); when the document is ready, transforming a simple HTML element into a Javascript-enabled interactive object. Furthermore, unlike singleton class models, you can use this plugin on multiple matching elements, and scoping and referencing the correct elements will not be an issue you need to worry about.

PHP mail using GMail SMTP and PHPMailer

|
I have been a fan of PHPMailer for quite some time. It is a very easy to use class that offers much more power than the regular PHP mail function. One problem I have had with PHPMailer, though, has been its rather poor documentation: methods and attributes are documented separately, with examples and tutorials that seem to overlap yet remain disconnected. However, this is a small fault in such a great contribution to the PHP development community.

One thing I found hard to track down was a specific example of how to set up PHPMailer to work with GMail's SMTP server. The setup is actually quite simple and I managed to get it working in only a few tries. However, if you are reading this you probably do not want to waste those 10 minutes, so I'll just show you the sample code below.

$pm = new PHPMailer();
$pm->IsSMTP(); // use SMTP instead of PHP mail()
$pm->Host = 'ssl://smtp.gmail.com:465';
$pm->SMTPAuth = true;
$pm->Username = 'yourusername@gmail.com';
$pm->Password = 'yourpassword';

The above code shows all the configuration settings that need to be set. I will not reproduce the normal message attribute settings needed for sending mail in PHP with PHPMailer as you can find them at the PHPMailer website.

Replicate Textmate html editor's code completion functionality in ViM with the close tag plugin

|
Being a long-time Vim user I have rarely been enticed by another source editor, as Vim really is the Swiss army knife of editors. I was surprised when trying out Textmate to find that it really does make editing HTML and code much easier with its range of auto code completion rules. My problem comes from HTML and XML's requirement to close tags, which, compared to programming languages, is quite verbose. Closing tags are hard to type because they require the shift key for both the start and the end of the tag, whereas most programming languages allow you to close with just a } or end.

Pure HTML editors like Dreamweaver do solve the problem with their auto tag closing; however, Dreamweaver is limited by its focus on web page (and some web application) development. Textmate, on the other hand, appears to strike a fine balance between time-saving automation and high extensibility. But Vim, being the king of extensible text editors (in my humble opinion anyway), does have a plugin that lets users who want a great free HTML editor benefit from the same automated code completion offered by the others.

The solution lies in the closetag.vim plugin written by Steven Mueller. The plugin is not fully automatic like the previously mentioned HTML editors, but it offers the same functionality when the [ctrl]-_ (underscore) keys are pressed. I personally find this a better solution than another auto tag closer I found previously, which had issues with code formatting. Using closetag means I can place the cursor where I need it, then just press the mapped keys to close off whatever HTML section I am working on.

To install the plugin, simply download the file and place it in your ~/.vim/scripts directory (which you may need to create). Then add the following line to your ~/.vimrc file.

:au Filetype html,xml,xsl source ~/.vim/scripts/closetag.vim

For Cake PHP coders, just add ,ctp to the end of html,xml,xsl. Other file types can be added the same way.
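The install steps can also be scripted. A small sketch, including the ,ctp addition for Cake PHP views (it assumes you have already downloaded closetag.vim into the scripts directory):

```shell
# Create the plugin directory and wire up the autocmd in ~/.vimrc.
# closetag.vim itself must already be placed in ~/.vim/scripts.
mkdir -p "$HOME/.vim/scripts"
echo 'au Filetype html,xml,xsl,ctp source ~/.vim/scripts/closetag.vim' >> "$HOME/.vimrc"
grep closetag "$HOME/.vimrc"
```

The final grep just confirms the line landed in your ~/.vimrc.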

Linode: Upgrading Ubuntu 8.04 LTS to 8.04.2

|
Ubuntu has recently released an update for its 8.04 long term support (LTS) release, versioned 8.04.2. According to Ubuntube.net, there are over 200 updates, including security and general bug fixes. Being an LTS point update rather than a distribution upgrade, special care was taken to ensure maximum compatibility and minimal disturbance of existing systems. I found this to be true, as my system ran fine after the update.

To update your Linode Ubuntu 8.04 release, or any Ubuntu 8.04 release, first check that you are in fact running this release by executing cat /etc/*release. You should see output similar to:

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=8.04
DISTRIB_CODENAME=hardy
DISTRIB_DESCRIPTION="Ubuntu 8.04"

Once you have confirmed you are working with the correct version, type the following commands:

sudo apt-get update
sudo apt-get upgrade

The first command updates your apt-get repository records; the second performs the required upgrade. After the upgrade has completed, the output of cat /etc/*release should be:

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=8.04
DISTRIB_CODENAME=hardy
DISTRIB_DESCRIPTION="Ubuntu 8.04.2"

The upgrade is now complete. Ideally you should test some websites, especially ones that use a lot of non-standard packages, to make sure the upgrade has not negatively impacted them, although in most cases upgrade issues should be non-existent.

Backing up and restoring OpenPGP keys in Ubuntu

|
I have recently set up Amazon S3 backups for my server using Duplicity; something I will write about later. A key feature of Duplicity is that it can encrypt your backups so that no one else can access your files. The upside is that your data will not be compromised; the downside is a more difficult backup restore process.

Encryption in Duplicity is implemented using OpenPGP, an open public key cryptography standard (provided on most Linux systems by GnuPG) similar in spirit to the cryptography used to verify SSL certificates. You can see all your OpenPGP keys by executing the following command:

gpg --list-keys

The command should return results similar to:

/root/.gnupg/pubring.gpg
------------------------
pub 1024D/484808AA 2009-02-14
uid Paul Chiu
sub 2048g/780E7E92 2009-02-14

What you are looking for is the 484808AA string: the id of a particular key. My output shows that I have only one key, as there are no other lines beginning with pub. Using this id you can export your key with the following command:

gpg -ao public.mypgp.key --export 484808AA

The command will export public key 484808AA and store it in the file public.mypgp.key. It is important to keep this file somewhere safe as you will need it during a restore. For those interested, the contents of your key file should look like:

-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.6 (GNU/Linux)

mQGiBEl....
-----END PGP PUBLIC KEY BLOCK-----

My actual key has 30 lines with the encoded block consisting of about 25 lines.
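If you script your backups, the key id can be pulled out of the listing rather than copied by hand. A small sketch (the helper name is mine, and the sample line is the example output from above rather than a live gpg run):

```shell
# Extract the short key id: the hex string after the slash on the "pub" line.
extract_key_id() {
  sed -n 's,^pub *[0-9]*[A-Za-z]*/\([0-9A-Fa-f]*\).*,\1,p'
}

# In a real script you would pipe `gpg --list-keys` in; here we use
# the sample line from the listing above.
echo 'pub 1024D/484808AA 2009-02-14' | extract_key_id   # prints 484808AA
```

Feeding in the full listing would print one id per pub line, so pipe through head -1 if you only want the first key.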

The next step is basically the same; however, you use slightly different commands to get your private key. The commands are:

gpg --list-secret-keys
gpg -ao private.mypgp.key --export-secret-keys [key id]

Another difference to note is that secret key listings start with sec instead of pub. Otherwise the secret key commands work basically the same way as the public key commands.

Restoring the keys is a very easy process. You may wish to restore when migrating your server/computer or when recovering from a system failure. The commands you need are:

gpg --import public.mypgp.key
gpg --import private.mypgp.key

After the restore you can execute the key listing commands again to check that everything was restored properly. You can now continue using encryption programs such as Duplicity to fully restore a system or create new backups that are compatible with the old ones.