All posts by JCF

My favorite ssh client

Writing this note today to mention my favorite ssh client for Windows.  My Unix admin duties have required me to use ssh for years, and I have tried many clients, including PuTTY, KiTTY, ZOC and SecureCRT.

I have had various problems with the clients mentioned, as none of them meets all of my requirements.

My ssh client must:

  • save credentials (not just keys, although key-based auth is good practice)
  • save sessions
  • have a tabbed view
  • handle editors such as nano and vi nicely, with color
  • allow one-click or command-line access
  • integrate into Windows nicely
  • allow me to transfer files in and out over the same connection

I presently don’t use my ssh client for proxy-type connections, although I suppose that is a valid use case for many.

After many years I found Xshell from NetSarang (http://www.netsarang.com/download/down_xsh.html).  It meets all of my requirements and comes with a 30-day eval or a full school/home license.

[Image: Xshell 4]

~JCF

Custom Schema Updater

1MB Corporation has created a custom e-commerce updating and deployment technology using something called a schema updater.  What we have done is create a portable website that will automatically deploy on another web server.  It works very much like a one-click installation: you supply a database and a web folder, and upload a few setup files.

Once the site is created, we send data and files over to the remote site, including HTML templates, CSS, images, categories, shipping, navigation and products, automatically creating a website so that you can start selling instantly.

You might think to yourself that that is pretty great, and it is.  What makes it special is our ability to heal our software with updates.  We essentially run a master preview copy on the main web server and can test all of the previously deployed websites against the latest version of the software.  Whenever we add a new feature or fix a bug, we can push that update to all of the remote websites at once.  Self-healing software and automatic updates mean that our clients don’t have to do that “technical stuff”; we do it for them.

[Image: schema]

To make it all work we built an interface that looks at the preview copy, selectively grabs tables and files from it, adds some extra pieces and packages everything into what we call a schema.  Using our custom deployment software we can rebuild a large 1000-item website in minutes and upgrade it in seconds.

Imagine how great it is to purchase a website from us that gets better over time.  We are constantly adding features for everybody to use, including social media marketing, video support, automatic site maps and dynamic navigation.  If we ever make a mistake, we just downgrade to the older version of the software and everything continues working.

Email info at 1mb.ca for more information.

~JCF

How to Make a Virtual Machine (VM)

I’m going to show you how to create a Windows 8 computer in 3 easy steps.

  1. Create
  2. Edit
  3. Install

A virtual machine is basically a computer inside of a computer. There are many intrinsic benefits to running a VM, such as portability: a VM is usually only one or two files, and it can run on many different kinds of computers, such as Apple, Linux and Windows.  I intend to keep my VM on a USB drive so I can bring it with me and not carry a giant laptop.

There are many options when it comes to virtual computers.  They have been around for years and constitute a very mature technology now.  I chose to use VirtualBox from Oracle.  A free license is available, and it offers a lot of features, such as snapshots and a full hypervisor that runs on Windows, OSX, Linux and Solaris. You can get it from VirtualBox.org. Download and install, you know the drill. Once it is installed you can fire it up and create all sorts of VMs.  I was even able to create an Android phone VM on a MacBook Air.

[Image: Oracle VM VirtualBox]

The last VM I created was OSX.  I configured it to run seamlessly in full screen, and it works quite well for my testing and for running Objective-C on Windows. For the purposes of this tutorial I have opted to create a Windows 8 VM, since I already have an Apple one, and I run my droid VMs inside of Eclipse instead of as standalone VMs, for development convenience.

[Image: macwinskin]


Step 1 – Create a New VM. I clicked New, chose a title, gave it 2GB of RAM and chose to create a new 25GB hard drive of the default type, “VDI”, though you could select other formats such as VMware (VMDK) or Microsoft (VHD).  I prefer VDI since it is the native format of the hypervisor I am running in this case. For everything else I chose the defaults, such as dynamic allocation, and finished the wizard.

[Image: Step 1 – name]

Step 2 – I edited my newly created VM and added network support [default settings].  I want to be able to activate Windows and eventually host a suite of tools. I went to the storage section and mounted the Windows 8 ISO to the virtual CD/DVD drive.  If you have the actual DVD, you can easily mount it via your parent computer’s drive.

[Image: Step 2 – mount ISO]

Step 3 – Install Windows.  Start the VM and install Windows.

[Image: Step 3 – install]
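
If you would rather script it, the whole thing can be done with VBoxManage, the command-line tool that ships with VirtualBox.  Here is a rough sketch of the same three steps; the VM name, file paths and controller names are just choices for the example, and options vary a little between VirtualBox versions.

#!/bin/sh
# Step 1 - create and register the VM, give it 2GB RAM and a 25GB VDI disk
VBoxManage createvm --name "Win8" --ostype Windows8_64 --register
VBoxManage modifyvm "Win8" --memory 2048
VBoxManage createhd --filename ~/VMs/Win8.vdi --size 25600
VBoxManage storagectl "Win8" --name "SATA" --add sata
VBoxManage storageattach "Win8" --storagectl "SATA" --port 0 --device 0 --type hdd --medium ~/VMs/Win8.vdi

# Step 2 - default NAT networking, then mount the Windows 8 install ISO
VBoxManage modifyvm "Win8" --nic1 nat
VBoxManage storagectl "Win8" --name "IDE" --add ide
VBoxManage storageattach "Win8" --storagectl "IDE" --port 0 --device 0 --type dvddrive --medium ~/Downloads/Win8.iso

# Step 3 - boot the VM and run the Windows installer
VBoxManage startvm "Win8"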

That’s it. It wasn’t that hard, was it?

~JCF

Wearable Computers

It seems there is a new trend sweeping the planet that will change the course of history as we know it.  Wearable computers!  I saw something like this many years ago on TED, and it seems it is now becoming a reality.

There are rumors of a new iWatch coming from Apple Inc. that will evidently monitor your health and allow you to use that information to track your overall exercise activity over time.  I wonder what sort of implications this could have on the medical industry?

[Image: Google Glass]


And of course there is the brand new Google Glass, which is getting a lot of press these days.  A law has already been passed in Virginia that you cannot wear the glasses while driving, and the device isn’t even available yet… who said the legal system was slow?  Some obvious business establishments, such as casinos and bars, have already banned the devices.

Google is trying to make it difficult for someone else to use your glasses, such that the device stops working if it is not in the possession of its original owner… I think they are too late, because the device has already been hacked using commonly available Android exploits.  I saw a post recently from Jay Freeman saying that he has officially bypassed the Android security.  He has a great name, but he’s a totally different guy.

Wearable computers aren’t really a new concept, but what is new is the streamlined approach they take.  It seems that very little training is required to use these devices, and they are incredibly useful.  I think a lot of uses can be found for these devices that are possibly not the real intention of the inventors.  Virginia banned them while driving, but I am thinking they could be made to enhance the driving experience.  The computer could, for example, detect nearby dangers or point out when you are driving poorly.  As cameras get more powerful, they could be used to augment your vision and provide a whole new way to view things.  Combine that technology with applications like Google Goggles (free object-recognition software) and you could be walking around seeing things you would never have noticed before.

Cool stuff.  I would love to have one.  For now I will just duct tape my phone to a stick and strap it to a helmet.

~JCF


MySQL Database Maintenance

Lately I have been working a lot with MySQL.  I’m not doing anything like Facebook does with their 9000+ MySQL instances, but I am dealing with some fairly steady database load issues that require proper attention when scaling up the underlying website.

The server itself is running on some fairly decent hardware, with the main databases using SSD drives for primary storage.  Of course, I don’t fully trust flash storage for databases yet, so a backup plan is in order.  Since this particular database uses the MyISAM engine, that drastically affects the options I have for locking the database.

Performing a mysqldump on this database will effectively lock every table and cause live transactions to queue up while the dump is occurring.  On a DB that is 74GB and growing fast, I can’t just dump the data and freeze up the system while everybody is using it.  For this reason I decided to do some automated table archiving before doing the dumps.

1. Archive old data using a script (MySQL w/ PHP)

2. Optimize tables if they need it

3. Hot-copy the database and flush logs (mysqlhotcopy w/ bash)

4. Dump the copied database using a script every hour to tier 2 storage

5. Backup the dumps every 4 hours to tier 3 storage/backups

[Image: 1MB schematic]

In this case there is a lot of historical transaction data, statistics and messaging history that is really not referenced much after its time has come and gone.  We still want to keep it, but we don’t need it consuming resources in the main production database.  I decided we should create a logging database and send over any data older than some arbitrary time period.  I achieved this by declaring a new database and using the following SQL command for each table:

CREATE TABLE IF NOT EXISTS log_database.stats_table LIKE prod_database.stats_table;

The result is an identical schema copy of the source database. Sweet! Now on to the next step: data movement.  This was achieved with another SQL statement:

SELECT stat_id FROM prod_database.stats_table WHERE stat_added < (NOW() - INTERVAL 1 HOUR) ORDER BY stat_id DESC LIMIT 0,1;

The above statement gives me the newest key in that table that is at least about an hour old; that key becomes my archiving cutoff.

So I have my key and I have my target table… now to move the data across.  I will do the old heave-ho using the REPLACE command.  It’s the best choice here because I might end up in a situation where a duplicate key exists in the target database.  I want to keep the same numbering system and update the existing data, and the REPLACE command is perfect for that; definitely better than SELECT INTO.

I will move my data like this:

REPLACE INTO log_database.stats_table SELECT * FROM prod_database.stats_table WHERE prod_database.stats_table.stat_id < MY_KEY_FROM_ABOVE

DELETE FROM prod_database.stats_table WHERE prod_database.stats_table.stat_id < MY_KEY_FROM_ABOVE

After deleting all that data I should optimize the table and clean up the fragmentation.

OPTIMIZE TABLE prod_database.stats_table

I took the above statements and wrapped them all in a PHP script.  A PHP script can run as a webpage or standalone from the command line.  I used the web server for convenience, to graphically see what I was doing while capturing all of the output into a single stream that could be displayed (web mode) or dumped to a file (shell mode).
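
For anyone who wants the gist without the PHP plumbing, here is a minimal sketch of the same archiving pass using the plain mysql client instead; the user, password and names are placeholders, not the real ones.

#!/bin/sh
# Grab the cutoff key: the newest stat_id older than one hour.
CUTOFF=$(mysql -N -u archive_user -pSECRET -e "SELECT stat_id FROM prod_database.stats_table WHERE stat_added < (NOW() - INTERVAL 1 HOUR) ORDER BY stat_id DESC LIMIT 0,1")

# Copy everything below the cutoff to the log database, delete it
# from production, then defragment the table.
mysql -u archive_user -pSECRET <<SQL
REPLACE INTO log_database.stats_table SELECT * FROM prod_database.stats_table WHERE stat_id < $CUTOFF;
DELETE FROM prod_database.stats_table WHERE stat_id < $CUTOFF;
OPTIMIZE TABLE prod_database.stats_table;
SQL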

Using the method above I was able to take the database from 74GB down to about 7GB.  That’s a substantial storage savings. Happy with my progress, I wrote the database rotation script.  The script locates all of the necessary tools, including the credentials, mysql, gzip, mysqldump, grep, cut, rm, date and cat, as well as a configuration file containing a list of the databases I want to dump.  The script cleans up any old backups (older than a day) and gets rid of them.  I strongly recommend that you use absolute paths in any delete scripts.  In this case I am recursively deleting folders with date patterns… I took my testing very seriously.

The script is a classic #!/bin/sh shell script and works great.  It gets rid of old copies and dumps each database while flushing the logs on command; it even has an option to detect and skip specified tables.  So I decided to try it on my test system and watched it work nicely.  It takes about 20 seconds to dump the test system, but alas the test system is way smaller than the production system, which has 5GB more data.  Quick math showed me that I was in for a 60-second production outage at the database’s current size… and it is growing.  To be effective, my maintenance program needs to run frequently.
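
The real script is longer, but a minimal sketch of the rotation logic looks something like this; all of the paths, credentials and file names here are hypothetical stand-ins.

#!/bin/sh
# Locate tools and targets with absolute paths only.
MYSQLDUMP=/usr/bin/mysqldump
GZIP=/bin/gzip
BACKUP_ROOT=/backups/tier2
STAMP=$(/bin/date +%Y-%m-%d_%H%M)

# Delete date-stamped backup folders older than a day.
find "$BACKUP_ROOT" -maxdepth 1 -type d -name '20*' -mtime +0 -exec rm -rf {} +

# Dump each database from the config list, flushing logs and gzipping on the fly.
mkdir -p "$BACKUP_ROOT/$STAMP"
for DB in $(cat /etc/mysql-backup.list); do
    $MYSQLDUMP -u backup_user -pSECRET --flush-logs "$DB" | $GZIP > "$BACKUP_ROOT/$STAMP/$DB.sql.gz"
done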

In steps mysqlhotcopy, a Perl utility that does a hot backup of a database with minimal downtime.  It doesn’t rely on replication or anything fancy; it is simply a fast file-copy mechanism.  It will still lock the database, but only briefly: long enough to lock the tables and flush the logs.  It makes sense to flush when the hotcopy is complete.  From here I take the static hot copy and use my dump script on it.  Works great, and no impact to the users.  I have enough space to run a hot copy until the DB reaches about 90GB, and then I should revisit my disks.
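
The invocation itself is short; something along these lines, with the user, password and target path assumed for the example.  Since MyISAM table files are self-contained, the copy can then be dumped at leisure.

# Briefly lock the tables, flush the logs and copy the raw MyISAM
# files to local disk; the dump script then works on the static copy.
mysqlhotcopy --user=backup_user --password=SECRET --allowold --flushlog prod_database /var/backups/hotcopy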

Everything is in place and working.  I used cron to run my tasks and got the data center guys to ensure a good backup of the files is done on a frequent schedule.  I record each event and email myself when it is done.
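
The crontab ends up looking something like this; the script names and times are illustrative, not the real ones.

# Archive and dump every hour; push to tier 3 storage every 4 hours.
0 * * * *   /usr/local/bin/db_archive.sh >> /var/log/db_maint.log 2>&1
15 * * * *  /usr/local/bin/db_dump.sh >> /var/log/db_maint.log 2>&1
0 */4 * * * /usr/local/bin/db_tier3.sh >> /var/log/db_maint.log 2>&1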

I recommend that everybody do a disaster recovery test after implementing a system like this.  While you are doing the disaster recovery, make note of the critical steps and keep them in an accessible place. You never know when disaster will strike next or what shape it will take, but you can rest assured that you have ensured minimal data loss and minimal downtime. In my spare time I intend to wrap these procedures and create a service to constantly manage, back up and report on the database.

For all the critics out there who think I should just use a binary backup: you are correct. I do also have a binary backup, but I don’t trust it completely.  It seems to do the job fairly well until it doesn’t… and then I have a proper backup less than one hour old to fall back on.

~JCF


Using a Version Repository

For a long time I haven’t really used a code version control repository.  For those that don’t know, this is used for storing documents, usually in the form of programming code, and allows multiple users to work on a project without stepping on each other’s toes.

There are a lot of different ways to do this, and it really depends on what you are doing.  It seems to me that the majority of developers have switched to using GitHub.  I looked at GitHub and it has all of the usual versioning features like branches/forks, tags and multi-user capability.  It seems to be a bit more flexible in how it handles things. One thing it doesn’t do is plug into Dreamweaver.  I’m not sure if Dreamweaver will ever include GitHub support, but since it already supports Subversion I can only imagine it will have to build in Git at some point.

Other versioning technology exists as well, such as CVS and Team Foundation Server (TFS).  I have used both, and I can say things have come a long way since the olden days.  TFS is a very mature Microsoft product best suited for .NET development.  I think the closest thing to TFS in the open source world is Git.  I have to say TFS seems better suited to the task, as it was very reliable whenever I had to use it.

[Image: Subversion logo]


After some investigation I found myself setting up Subversion for the project I am working on, as the other developers are using Dreamweaver and prefer not to change.  This left me figuring out the architecture of the repository, which has many factors to consider, such as the number of files and where the programmers are located (same site/remote).  I also had to figure out the access methods we are going to use.  DW supports the http/https/svn+ssh and svn protocols directly.  For ease of use and security, the only logical choice was https, as the other programmers are remote and we require security.  The other secure option, svn+ssh, seems like a real pain to configure on Windows.  With all that decided, I was more or less locked into using Apache as the web server, since that method, when used with Dreamweaver, requires WebDAV, which in turn dictates mod_dav_svn on Apache.

So it seems the IDE decided my fate for me.  Since we use Dreamweaver and are located at remote sites, we had to use Subversion on a remotely hosted Apache.  We have selected a host, have our SSL certificate and are ready to build.  I have also opted to install Jenkins to do the automatic package builds.

Now all I need to do is set it up.
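
The server-side part boils down to a couple of commands.  Here is a sketch with hypothetical paths, user name and URL; Apache’s mod_dav_svn then serves the repository over our SSL certificate.

# Create the repository and a password file for the remote developers.
svnadmin create /var/svn/project
htpasswd -c /etc/apache2/dav_svn.passwd developer1
# With mod_dav_svn pointed at /var/svn, Dreamweaver connects to
# https://svn.example.com/svn/project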

~JCF

VPS Servers

Our cheap VPS web server accounts are intended for advanced users who need to manage their own private server but are not yet ready to move to a dedicated hosting solution due to the high maintenance costs. VPS servers are based on an innovative virtualization technology that allows each VPS user to be completely independent from the others on the same machine. This technology brings you generous amounts of RAM, CPU and network resources, which you can make use of at any given moment.

[Image: covair]

Starting at $30.00 per month with lots of options!

More information here: http://www.rtihosting.net/?lang=en&action=vps-hosting

~JCF