Goodbye World, So Sad to See You Go, Dennis Ritchie

by dougw.

October 12, 2011, a sad day for geeks around the world.  The great Dennis Ritchie has died.  For those of you not familiar with Ritchie, he is best known as the creator of the C programming language, the language on which the vast majority of the world's operating systems are based.

Between 1969 and 1973, while working at AT&T's Bell Labs, Ritchie, with some assistance from Ken Thompson, created the C programming language.  C was based on B, the language Thompson had written for the PDP-7 (itself derived from the earlier BCPL programming language), and it took into account suggestions from several of his Bell Labs colleagues (Brian Kernighan, Douglas McIlroy, Steven Gladstone and Joe Ossanna).  The powerful new language was eventually brought over to the then-new PDP-11 systems, and in 1973 C was used to rewrite the UNIX kernel for UNIX TSS 5 (time-sharing system).  This was a major achievement in computer technology, since at the time it made UNIX one of the first OS kernels not implemented in assembly language.

In 1978 Ritchie teamed up with Brian Kernighan to write a book regarded by many as the best programming book ever written.  The book was simply entitled "The C Programming Language," a thin volume describing the language in roughly 270 pages of text.  The book is so popular that it is also referred to as the K&R C book (the first initials of the authors' last names).  It's also worth noting that the AWK scripting language (which influenced the development of Larry Wall's Perl) followed the same naming convention, having been authored by Alfred Aho, Peter Weinberger, and Brian Kernighan.

Had it not been for Ritchie, modern computer systems would likely have evolved in a drastically different direction.

To read more on Dennis Ritchie, please refer to the following links:

Dennis M. Ritchie Home Page

Dennis Ritchie Wiki Page

C (programming language) Wiki Page

Unix Operating System


How to Get Your Public IP Address from the Command Line in Linux, OS X and Other Unix Variants

by dougw.

Sometimes a browser isn't available when you're working in a POSIX/UNIX environment.  You also can't always rely on the results of ifconfig, since nine times out of ten these days your machine is sitting behind a firewall or NAT router and will only show a private address.  You may be wondering why knowing your public IP even matters.  A great example: maybe you have a computer at another location that you occasionally need to access.  Let's say it's your home computer, on an internet service that uses DHCP for its IP assignment (as a lot of them do these days).  In that case it's not always convenient to jot down your IP address every time you leave your computer.  So you would want to set up some sort of cron job (using crontab) that periodically emails you the IP, or even use SSH to push that information to an offsite location (a public server or VPS, for example).  That way, when you need to ssh back to your home computer to check on something, you'll always have the current IP easily available.  A couple of commands that can help you achieve this are curl, or lynx (a text-based browser) running in batch mode; a quick sketch of the cron approach follows.
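As a quick illustration, here's a minimal sketch of that cron-based approach, assuming a working local mail setup and a hypothetical address; the curl options are explained below, and you'll probably want to trim the output down to just the IP, which is exactly what the rest of this post covers:

# crontab -e entry: mail yourself the page containing your public IP once an hour (hypothetical address)
0 * * * * curl -s -A "Chrome" http://imh-myip.com | mail -s "home public IP" me@example.com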

So without further ado, let's get right to the meat of it.

Since most sites that share your public-facing IP typically restrict access to browsers, one way around this is to trick those sites into thinking you're actually using an approved browser.  I used to use http://imh-myip.com or http://whatismyip.com to retrieve my public-facing IP from the terminal when necessary.  As you'll notice, using curl against those pages returns funky results.  A quick workaround is to invoke curl's user agent string option, so simply passing "Chrome", "Mozilla" or "MSIE" as the parameter to that option should do the trick.  For a full list of different user agent strings please see the following link.  So a working example would appear as follows:

curl -s -A "Chrome" http://imh-myip.com

It's worth noting the "-s" option, which puts curl in silent mode and suppresses the connection status and error messages curl would otherwise output.

If you want to go a step further and grab only the IP, you can use grep to extract just the characters that resemble an IP address, like so:

grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}'

So our full command uses a pipe to feed the output of the curl statement into the grep command:

curl -s -A "Chrome" http://imh-myip.com | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}'
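If curl isn't handy, the lynx batch-mode approach mentioned earlier works the same way: -dump prints the rendered page to standard output, and lynx also accepts a -useragent= option should a site get picky about browsers.  A rough equivalent of the command above would be:

lynx -dump http://imh-myip.com | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}'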

How to get the latest download links for your instructional pages

by dougw.

So lately I've been reading up on push-based web servers (i.e. APE, Node.js), and I was reading an article on installing Node.js.  While reading it I was curious whether 0.4.4 was in fact the latest version (it was), and in the process of checking the download page I thought, "gee, wouldn't it be nice to have these things updated automatically?"  That got me thinking about how it could be done.  After two minutes and a little bit of command-line magic with sed, the problem was solved.  Now I know this can be done very easily in Perl or with PHP's PCRE, however, for the sake of brevity, simplicity and overall compatibility, I'm going to show you how to do it as a crontab job using curl, grep, sed and sort (which, with the possible exception of curl, are all common tools on any Linux/Unix based system).

First off, you need to get your output, which I used curl --silent to get.  To digress for a second, there's a minor chance that curl *may* not be installed on your system.  If you're using an RPM-based system such as RedHat/CentOS/Fedora you can simply run "yum install curl" (or "sudo yum install curl").  For a dpkg-based system such as Debian/Ubuntu/Mint you can use "apt-get install curl".  If you're using Arch Linux or Gentoo I'll just assume you know what you're doing already (but for those that don't, pacman and emerge are their respective package managers).  Last but not least, if you're using OS X or some other BSD-based flavor, you can use the ports system (which needs to be installed separately on a Mac; Homebrew or Fink also work) and enter "port install curl".  Anyhow… to get the output I used curl --silent http://nodejs.org/dist/ .  This gives me a nice dump of the page contents (including the HTML) which looks a little bit (minus some truncation on my part) like the following:
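(The excerpt below is reconstructed for illustration, since the real dump was truncated here; actual file names, and the current version, will differ.)

<a href="node-v0.4.2.tar.gz">node-v0.4.2.tar.gz</a>
<a href="node-v0.4.3.tar.gz">node-v0.4.3.tar.gz</a>
<a href="node-v0.4.4.tar.gz">node-v0.4.4.tar.gz</a>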

Now that you can see the output you're working with, it's time to pipe it into grep to pull out the content we want.  So the command now becomes the following:

curl --silent http://nodejs.org/dist/ | grep node-v

This will give you only the lines you'll need for determining the latest version.  Now on to the magic of sed, sort and tail.  If you need to grab just the file name itself you will definitely need sed; otherwise you can get away with just a numerical sort and a shell variable to store the page URL, e.g. curl --silent http://nodejs.org/dist/ | grep node-v | sort -nk1 | tail -1.  Assuming you're just grabbing the version and require sed, you can use the following:

curl --silent http://nodejs.org/dist/ | grep node-v | sed 's@.*">\(node-v.*\)</a\>.*@\1@g' | sort -nk1 | tail -1

The use of sed effectively discards all the HTML tags and only spits out the file names.  The numerical sort (denoted by the "-n" in the sort command's arguments) then orders the output from lowest to highest (you could also use the reverse flag and pipe to head instead).  The last bit of the pipeline uses tail with the argument -1 to grab only the last line of the stream (it's important to recognize that the minus is not actually a negative sign but rather a switch denoting an argument to the tail executable).
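At the time of writing, that full pipeline printed a single file name, something like this (illustrative; whatever the current release happens to be will show up instead):

node-v0.4.4.tar.gz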

Now that I've shown you the sed way to grab the line, let's get a bit more crazy and actually output an HTML-formatted link with the magic of some shell variables (this will also make our little one-liner a lot more fun and modular):

SITE="http://nodejs.org/dist/"  # defining site URL
LATESTLINK=$(curl --silent $SITE | grep node-v | sort -nk1 | tail -1)  # Latest version line including HTML output
OUTPUT=$(echo -n $LATESTLINK | sed 's_href\="_&'$SITE'_') # HTML output reflecting site URL
FINALLINK=$(echo -n $OUTPUT | sed 's#^\(<a.*</a>\).*#\1#') # using sed to strip out excess non-link material

And there you have it, the FINALLINK variable will now reflect the HTML anchor for the link to the latest version of Node.js (provided the formatting of the HTML page remains the same).
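Echoing the result back then would have produced something along these lines (illustrative output; the version and link text will differ today):

echo $FINALLINK
<a href="http://nodejs.org/dist/node-v0.4.4.tar.gz">node-v0.4.4.tar.gz</a>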

To incorporate this into a website you can merely make it a shell script.  So fire up vi or nano and prepend:

#!/bin/bash

…to the top line and then append the following line to the end:

echo $FINALLINK

Once you have done that and saved the file (let's call it latestnode_ver.sh and save it in /home/doug/bin/), you then need to make it executable, so run the following:

chmod +x ~/bin/latestnode_ver.sh

Now you can get the latest version using that command (assuming ~/bin is within your default PATH variable).
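For reference, the completed latestnode_ver.sh is just the pieces from above stitched together; a sketch of the whole file (with the same assumptions as before about the nodejs.org listing format) looks like this:

#!/bin/bash
# latestnode_ver.sh - print an HTML link to the latest Node.js tarball
SITE="http://nodejs.org/dist/"  # site URL
LATESTLINK=$(curl --silent $SITE | grep node-v | sort -nk1 | tail -1)  # latest version line, HTML included
OUTPUT=$(echo -n $LATESTLINK | sed 's_href\="_&'$SITE'_')  # rewrite the relative href as an absolute URL
FINALLINK=$(echo -n $OUTPUT | sed 's#^\(<a.*</a>\).*#\1#')  # strip everything after the closing anchor tag
echo $FINALLINK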

You can then create a cron job to modify your HTML file regularly (using crontab -e).
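A hypothetical crontab entry (added via crontab -e; the output path is just an example) that refreshes an HTML snippet once a day might look like:

# minute hour day-of-month month day-of-week command
0 3 * * * /home/doug/bin/latestnode_ver.sh > /var/www/html/latest_node.html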

Hello world!

by dougw.

The time for procrastination has met its bloody demise.  The site is now up and functional.  All the super-fun technologies I'm used to working with on a regular basis have been installed and set up.  Nano (yes, nano; vi should've been left back in the '80s along with massive programmer beards and frighteningly large-framed glasses) has been configured with the fun features and syntax highlighting I've grown so accustomed to.  MySQL, Perl, PHP, Python, and Ruby are alive and kicking.  Puppet is rearing its ugly head to ensure administrative laziness is running at full capacity.

Yes, sadly, this is WordPress.  As mentioned above, there's a certain level of laziness in getting a site off the ground, what with all the other fun configurations I had to set up from scratch (see above).  Primarily this site is here to serve as my knowledge dump so I don't lose track of all the information rattling around in my head.  Sadly that's what happened with my C++ and JavaScript addictions from earlier in the millennium.  The general idea is not to let that happen again.  And hopefully the few people who come across this site will enjoy what they read and maybe consider contributing on other subject matter.