Monday, February 28, 2011

A Short RVM setup sheet



RVM installation for FreeBSD


There are a few good steps outlined on the RVM website; this document mostly echoes those steps: RVM install

The author (Wayne Seguin) offers probably the best tech support available. He is on irc.freenode.net, but you'll have to authenticate in order to post. The best route is to Register on freenode, set up a username, and log in to the #rvm channel with authentication. The web interface is pretty nice.


/msg nickserv identify <your password>


Basic Installation


Most of these steps should NOT be run as root, though some of the dependencies will need to be. Install git, bash and curl. When these are installed, run bash and update your .bashrc with this line:


[[ -s "$HOME/.rvm/scripts/rvm" ]] && . "$HOME/.rvm/scripts/rvm"


Then:
   chmod 600 .bashrc


and edit .profile, adding this line at the end of the file:
   source ~/.bashrc


Log out of the console, and log back in to make sure everything works.
Now fetch rvm: 


$ bash < <(curl -s https://rvm.beginrescueend.com/install/rvm)


Log out and log back in again. To test the rvm installation, type the following; it should say "rvm is a function":


$ type rvm | head -1




Updating the install


You should probably run these two commands each time you log in to your shell, but do it now to make sure you have the newest version:


$ rvm get head
$ rvm reload


Getting Dependencies and Ruby


You'll have to check the notes for your platform: 
$ rvm notes


These notes are very important. You'll have to install a bunch of dependencies as root. For Ubuntu, for example, it gives a list of packages to install:


ruby: /usr/bin/apt-get install build-essential bison openssl libreadline6 libreadline6-dev curl git-core zlib1g zlib1g-dev libssl-dev libyaml-dev libsqlite3-0 libsqlite3-dev sqlite3 libxml2-dev libxslt-dev autoconf libc6-dev ncurses-dev


To find out which Rubies you can install, type:
    $ rvm list known
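
The output is a long list of interpreters and versions. Abridged, the entries look something like this (your list will differ):

    ruby-1.8.7-p334
    ruby-1.9.2-p180
    jruby-1.5.6
    ree-1.8.7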


Now for the ruby installation (NOT done as root). Just choose the version, and run the following:  


$ rvm install 1.9.2

where 1.9.2 is the Ruby version you chose.




This will take some time to build, since it's being built from source. Also, there will be no screen messages during the build process. When completed, both ruby and rubygems will be installed somewhere under your .rvm directory.


Setting default Ruby & Gems


If this is the first installation, you won't have any ruby or gems defined by default. So, to set your default ruby to ruby-1.9.2-p180, enter:


$ rvm --default use ruby-1.9.2-p180
$ rvm list


The above lines will display which version is the current default. You can further verify with:


$ ruby --version
$ gem --version


Next, set your default gemset to something tied to its purpose (say, rails3):


$ rvm --create use 1.9.2@rails3

This tells rvm to use ruby 1.9.2 paired with the gemset called rails3. What happens is that rvm creates a separate set of gems to use with ruby-1.9.2. If you should ever need another ruby paired with another set of gems, you can have multiple combinations hanging around, and simply tell rvm which one you would like to use (an example follows below). To verify the current gemset, type:

$ rvm gemset list

You should see an entry for "global" and your special gemset: rails3. At this point, you should update your gems to the latest. This is done by:


$ gem --version
$ gem update --system
$ gem --version


You should see the version jump to the latest (currently 1.8.5). There is some debate on whether any of these commands should be run as root, but I would not recommend it if everything appears to work. There is no sense giving root access to programs that do not need it, especially on web servers.
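
And here is the promised example of juggling a second ruby/gemset combination. The 1.8.7 ruby and the "legacy" gemset name are purely hypothetical, and this assumes 1.8.7 was already installed with rvm install 1.8.7:

$ rvm --create use 1.8.7@legacy    # create and switch to a second combination
$ rvm use 1.9.2@rails3             # switch back to the rails3 setup
$ rvm gemset list                  # confirm which gemset is now active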


Installing rails (optional)


That should be it for the ruby installation. You can run ruby normally as the current user without worries. RVM basically intercepts calls to ruby and its utilities, and allows you to have everything installed in your local directory, as well as multiple versions of Ruby and Gems, which are kept as "Gemsets". 


Before installing Ruby on Rails, it's worth visiting the Rails website to determine the correct version. At http://rubyonrails.org there are three available: the release candidate (unstable), the dot-zero version (too old), and the bugfix version. In this case, the bugfix version is 3.0.7, so we will install that:


$ gem install rails --version 3.0.7


All dependencies should binplace locally (in the .rvm subdirectory), without the need for root access. You should be able to see what's installed with:


$ gem list
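
If the rails install succeeded, the listing should include rails 3.0.7 and its dependencies. Abridged, the output looks something like this (your gems and versions will differ):

*** LOCAL GEMS ***

actionmailer (3.0.7)
actionpack (3.0.7)
activerecord (3.0.7)
rails (3.0.7)
rake (0.8.7)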


If any troubles occur, or there is a need for build-time modifications, you can always poke around in the sources in the .rvm directory to customize anything you install.


Maintenance


$ rvm get head
$ rvm reload

Thursday, February 24, 2011

The misery of Nth Party Packaging Systems

<mood - very annoyed>

I have a beef. It's about the compatibility between various Nth tier packaging systems on any given platform.
When you build a software package, it should just work. A software package to be installed on FreeBSD should not be revealing the developer's personal environment settings (like trying to look for files on the dev's Apple laptop paths, and spitting out error messages about it).

This is the strange case of the ruby gem called ZMQ, which is the Ruby interface to the ZeroMQ libraries. It essentially makes ZMQ usable under Ruby.

This is 5th party software. Third-party software is managed under FreeBSD by the ports collection. The ZMQ libraries install just fine. But you're better off using RVM (Ruby Version Manager) to manage your various installations of Ruby on your system. RVM manages 4th-party software, because it sits one level below a third-party packaging system. The GEM packaging system (Ruby's module installation system) is 5th party software, because it runs under RVM.

FreeBSD -> Ports -> RVM -> GEM
            3rd     4th    5th

I don't know what second-party software is.

Basically, we have too many packaging systems here on a single platform. They all work fine if you don't try to get them to work together. But everything falls apart when you expect them to interoperate without problems. The GEM installer under RVM under FreeBSD simply cannot figure out where to find the libraries it needs to build GEMs from scratch. It is completely unaware of the location where FreeBSD normally installs libraries (/usr/local), and so you have to dig through a zillion layers of logfiles and directories to figure out which Ruby module failed, and why, and how to make it work.

This takes away from quality programming time. Seriously. Hours.

What would fix this is some kind of SaaS RESTful server at FreeBSD that the GEM installer could query, based on the output of uname -a, the gem that's being installed, and the package it is looking for. The RESTful server could respond with package metadata giving installation locations of dependent software for that version of FreeBSD. This would be a cool idea, because people who write GEMs probably don't know the standard installation and library paths for all the possible operating systems out there. Nor do they know which subsystem their package is being installed under.
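
Just to sketch the idea, an exchange might look something like this. The host, path and fields below are entirely hypothetical - no such service exists:

GET /pkgmeta?os=FreeBSD-8.1-amd64&needs=libzmq HTTP/1.1
Host: pkgmeta.example.org

{
  "package":    "zeromq",
  "prefix":     "/usr/local",
  "libdir":     "/usr/local/lib",
  "includedir": "/usr/local/include"
}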

Such metadata servers could provide information for literally any OS, including Windows and Linux. Each of these operating systems has unique locations for software installations.

The goal is to rely on standard installation locations for any given platform, so you can give people the "IJW" experience (It Just Works - a term coined at Microsoft) when installing stuff.

Wednesday, February 23, 2011

The value of taking breaks

One of the things I keep re-discovering when working on little software projects is the value of thinking about stuff away from the computer.

Today, for example, I spent two hours mucking around @thecoffeeshop, debugging a Javascript file on the CR-48. It was mostly problem discovery, mixed in with trying to come up with an answer in code. It was confounding, because each of my attempted fixes would not work. I was also fixing little things along the way, and many of my ancillary thoughts were related to the act of coding and using the computer.

It started to snow, and got very cold, so I gave up and walked back home.

During that time, about 20 minutes, I thought lightly about the problem (solution analysis) along with stuff not related to the programming issue, such as my schedule for the rest of the day, the weather, etc. When I got back home, I set up my dev environment, and basically fixed the problem in 5 minutes.

What does all this say?  I think a few things can be learned from my little 'case study':

Fixing little problems is mostly what development is about. Particularly informal, incremental development - but this could just as easily apply to planned development. There are always going to be technical obstacles in implementation, and your challenge is to overcome each one and close in on a finished product.

For each of these little obstacles there are little phases you take to solve them.

1. Research - find out what you don't know. This can take a little while sometimes. For this particular problem, I had to do some research on bit-mapping and codepages the night before. Little bits of test code were written as proof-of-concept. The takeaway was not really a solution, but a realization that I needed to do more work to satisfy certain dependencies in order to find a solution. Frustrating and messy.

2. Problem Discovery - actually examining and testing existing code to discover what it wasn't doing. Frankly, this was also messy. When I got tired of trying to analyze what was going on, I ended up doing code-housekeeping as a 'break', only to change things to the point where I was no longer certain the environment was the same. I ended this phase feeling annoyed with myself.

3. Solution Analysis - interestingly enough, away from the computer. I couldn't just impulsively follow hunches by doing research, testing code snippets or cleaning up existing work. As a diversion, my thoughts were interleaved with other totally unrelated matters for about 20 minutes. The takeaway from this was a feeling of being less frustrated and tangled. I also had a vague suspicion about what I had been doing wrong. Thoughts were bubbling up naturally, rather than being forced through the kind of slavery I often put my mind through when trying to solve something.

4. Trial Implementation - about 5 minutes to a solution (or at least substantial progress toward one). I did do minor layout housekeeping first to clarify the toilet-paper roll of code. I was also using better equipment, and ate lunch while I was working. I rolled back to a previous version I knew was better quality, and resumed work from there.

I don't regard myself as a very good programmer. I have struggled with this fact for a while. I take heart in the criticism of co-workers who have said in the past that I tend to make things too complicated. They are correct. I also take heart in the fact that almost every project I worked on was not entirely my work. Contributions of other people are ever-present. It helps to remember these humbling facts, because being open to a little self-criticism probably makes you more effective at what you do.

Thursday, February 17, 2011

Basic Javascript Call Types

Some of the hardest-to-understand things about Javascript are also some of the most basic concepts in any programming language. Take simple function declarations. Just input-output, parameters and an algorithm, right? Well, not quite. It's actually a bit more complicated.

Javascript is a dual-style language. You can program it procedurally, like C, or in an object-oriented manner, somewhat like Ruby or Perl. The basic function/class declaration takes the form:

function Name (parameter_list) { code_block }

The form above is really just syntactic sugar for the function-expression assignment syntax (hoisting differences aside):

var Name = function (parameter_list) { code_block }

Name is an alias for an anonymous function pointer. Furthermore, every object instance in Javascript is more or less equivalent to assignment of a simple anonymous hash table:

var Name = { code_block }

Above, we create an object called Name. There is no prototype (ancestral shadow object) attached to Name, as you would normally get if you used new() (which you can't with a hash), but member methods and properties assigned inline or posthumously will function just as if they were part of an object.
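
For example, here is a tiny throwaway sketch of that idea (all the names are illustrative):

var Name = { inline_property: "I'm an inline_property." };

/* add a method posthumously */
Name.posthumous_method = function ()
   {
   return "I'm a posthumous_method.";
   };

// prints "I'm an inline_property."
document.write ( Name.inline_property + "<p>");

// prints "I'm a posthumous_method."
document.write ( Name.posthumous_method() + "<p>");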

Similarly, a function declaration also doubles as a class declaration in Javascript:

// namespace collision wrapper START
(function () {

/* this is our dual class/function */

function Klass(param) 
   {
   this.inline_property = "I'm an inline_property.";
   this.inline_method   = function () 
      { 
      return "I'm an inline_method."; 
      }
   return "I'm a return value.";
   };

/*********************************/
/* add some members posthumously */
/*********************************/

/* Add static members to Klass */

Klass.static_property = "I'm a static_property."; 

Klass.static_method   = function ()
   { 
   return "I'm a static_method."; 
   } 

/* Add instance members to all objects created from Klass */

Klass.prototype.instance_property = "I'm an instance_property."; 

Klass.prototype.instance_method = function ()
   { 
   return "I'm an instance_method."; 
   }

/**********************************************************/
/* Now let's see the different types of calls we can make */
/**********************************************************/

var d = document;

/* call as a plain old function */
d.write ("[Call as a function]<p>");

// prints "I'm a return value."
d.write ( Klass("parameter") + "<p>");

/* call as a static Class */
d.write ("[Call as a static class]<p>");

// prints "I'm a static_property."
d.write ( Klass.static_property + "<p>");

// prints "I'm a static_method."
d.write ( Klass.static_method() + "<p>");

/* call as an instance of Klass (an object) */
d.write ("[Call as an Object]<p>");

var K = new Klass();

// prints "I'm an inline_property"
d.write ( K.inline_property + "<p>");

// prints "I'm an inline_method"
d.write ( K.inline_method() + "<p>");

// prints "I'm an instance_property"
d.write ( K.instance_property + "<p>");

// prints "I'm an instance_method"
d.write ( K.instance_method() + "<p>");

}) (); // namespace collision wrapper END

The point of the code above is to show that a function declaration can double as a class declaration. If the function Name (or Class Name) is new()-ed, it has nearly the full capability of an object. So not only is an object mostly interchangeable with a hash, but a class is mostly interchangeable with a function.

Class <=> Function and Object <=> Hash


How's that for confusing? Little old Javascript is not as simple as it seems!

Monday, February 14, 2011

Some first impressions of Javascript

In the past couple of weeks I've been learning Javascript. I guess I put it off for so long because I thought it would be simple to pick up and learn. Unfortunately, I've discovered that's not the case. The complexity (for want of a more suitable term) is definitely an encumbrance to learning.

After two weeks of reading and writing, I view Javascript as a comparatively crude procedural language with an overly-liberal free-form structure that relies heavily on programmer technique and understanding of internals to code solidly. For the beginner, the language seems quite confusing, but it's possible to write stuff.

It seems there is no 'correct' approach to writing good code. Should you write Javascript in a defensive, yet contrived, object-oriented manner, or in a direct, simple functional manner? Is the goal to write small code or to write protected code? There is plenty of documentation out there advocating both approaches. Unlike C and C++, there is no single, revered, authoritative source of information.

Javascript seems to be a language that has evolved to keep pace with changing standards in HTML and associated technologies. It has a big job to do. It integrates fully with and can substitute for many functions of CSS and HTML, of which there are many versions.

Javascript has internal libraries to support a bunch of things to do with browser content, control and interaction, and many external libraries such as jQuery, Prototype, Dojo, YUI and others that will do anything from compensating for deficiencies in Javascript to offering complex widgets, like popout calendars, forms and animations.

In fact, when all the hens come home to roost, the amount of information to learn is mind-boggling, disorganized - and complicated. To further steepen the learning curve, programmers must consider issues related to performance, browser compatibility, event handling, security, scoping, and user and programmer friendliness.

There are not many languages that compare to Javascript. Simply the fact that the interpreter is a specialized language hosted by a web browser makes it unique. But Javascript carries some noticeable similarities to other scripting languages like Perl and Ruby.

Like Perl, it is not really object-oriented, but can be carefully used in such a way as to mimic object-oriented features to provide some form of inheritance, member access protection, and possibly classes and class members. I don't think polymorphism enters the picture, since there are not really any meaningful types. Like Ruby, its object-oriented model is soft, mostly typeless, and you can mutate "objects" with new members easily after instantiating them. The idea of hidden prototypes is similar to Perl's symbol table and Ruby's "shadow classes". Execution context and reliance on closures are other things which bear similarity to Ruby and Perl - but closure-oriented programming is hard to understand and diagnose.

The whole "a function is an object is a hash" thing perhaps has gone too far when you discover Javascript arrays are quite inefficient, being hash tables (or at least associative arrays) themselves. And there are no integers. And something called the scope chain (still learning about that) can easily kill off performance if you aren't aware of its behind-the-scenes behaviors.
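
Two throwaway snippets illustrate those points (nothing here is real project code):

var a = [];
a[0] = "zero";
a["color"] = "red";          // an array happily takes a string key, like a hash
document.write (a.length);   // prints 1 - the "color" entry isn't counted

document.write (typeof 42);  // prints "number" - all numbers are floating point;
                             // there is no separate integer type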

But in the end, the language itself is double-edged, with features that are simultaneously beneficial and detrimental. I suspect it is a language of trade-offs.

I should mention the supporting toolsets, especially debuggers, are fairly low-quality compared to their C and C++ counterparts, although the profilers are pretty good. It depends on the browser. Probably the best debugger I've found is the one included with Chrome - but it goes off into the weeds, has a few questionable behaviors, and isn't comprehensive.

It's a little sad to think that a language so muddled is the pillar of the future -- the future being cloud applications. Yet, this is kind of the regressed world we live in, muddled and uncertain.

Saturday, February 12, 2011

Cloud-based IDEs in the Perfect World

One of the complaints I see on mailing lists for the Google CR-48 laptop is the lack of development tools, particularly the absence of Java. As a developer, it can be a bit frustrating to have your favorite portable computing machine bolted into a scantily-equipped programming environment.

But the CR-48 is designed to embrace the concept of cloud computing. This means in a Perfect Universe, all the tools you would normally expect to find on your typical desktop will have been moved to the internet as web-based software. In this Universe, you no longer have to install, maintain, synchronize or otherwise deal with software installed on your laptop, desktop or work computer. This will all be taken care of at various data centers in the cloud (AKA the internet).

So if you want to open Notepad, you will just point your web browser to the notepad application on the web. If you want to compile some C++ with the GNU compiler suite, you'll simply point your browser to an AJAX application which will give you a command prompt. If you want to debug a C# application, you will just go to Microsoft's website, where they will have your personal copy of Visual Studio online, complete with all your projects. Everything in the Perfect Universe will happen through your browser, so there is no need to store anything on your computer. These earthly things will be left behind.

But in the Not-So-Perfect Universe, none of these web applications actually exist yet. And we should be mindful of the fact that these types of applications are incredibly complex and expensive to build - much more so than the original desktop applications they are designed to replace. The technologies needed to elevate us to the Perfect Universe are fragmented, distributed and kludgey.

This is not to say it won't all one day come true, but it ain't gonna happen in 2011.

So for a code geek using a cloud-based appliance like the CR-48, the frustration is understandable. The CR-48 in user mode grants no access to the underlying filesystem and operating system. So you can't install all those juicy developer tools and languages.

(You can go into developer mode on the CR-48 and install Linux, but it's not playing fair by the cloud, so we will ignore that option in this discussion)

If you were thinking 'Java applets, maybe I can do those', I am sorry, but Chrome OS doesn't speak Java. Not even in the browser. However, it does fluently speak Javascript, if that's any consolation. Not quite the Cockney of computer languages (like Perl), but more like pidgin English. The compensating factor is the Chrome browser's Javascript engine - V8 - which is essentially Tornado in a can. It speaks Javascript very fast. If you live in the cloud, you'd be crazy not to use it, whatever harp or computer you choose to play.

Given this future, it's amusing to think the Javascript language is going to be critical to the success of living in the Perfect Universe. The language is so limited that it begs for the black arts of the high coding priests, whose bag of tricks will make it good again. Apparently, the services of this elite priesthood will matter, because research has shown that even 500ms of web application-induced latency will drive away 20% of your online customers.

So in this Ominous Perfect World, if you are in a pinch and must use your web browser for all things, here is a short list of browser-based development tools that look promising:

Jgate - free IDE + hosting for Javascript & Java apps.

ShiftEdit - an IDE for several languages.

Good luck.

Friday, February 11, 2011

Win7 vs. Ubuntu vs. CR-48 boot times

I just completed a little informal testing of boot times between Win7 and Ubuntu 10.10. Just for fun I included times for the same operations on the Google CR-48 netbook laptop running Ubuntu.

Amazingly, between the two Ubuntus, the desktop machine with somewhere near twice the processing power was at least 50% slower to boot than the Google CR-48 laptop. And Win7 was approximately 50% slower to reach a usable state than Ubuntu on the same desktop machine.

Why is the CR-48 so fast? Well, the BIOS boots about 4x faster. Plus Ubuntu on the CR-48 has no swap file to contend with, and therefore no disk thrashing (even if you could thrash a drive with no moving parts.)

However, comparing Ubuntu and Win7 desktop boot performance was a bit more subtle. Superficially, each OS had fairly similar boot times at each stage, with the main exception being disk spin-down after a stage was reached. Ubuntu had much less disk spin-down than Win7, and thus became usable much sooner. I think this is mainly due to the way Win7 loads its drivers and prepares its files.

Disk spin-down was measured by when the drive light stopped being solid. From a practical perspective the OS is basically unusable until disk activity allows for some breathing space. Performance can even be made worse if applications are launched before the disk stops intensely thrashing.

Informal boot time comparisons

Stage                             Win7 on Intel Desktop    Ubuntu 10.10 on Intel Desktop   Ubuntu 10.10 on Google CR-48
Power on to Boot Select           20s                      20s                             5s
Boot Select to Login Screen       27s + 23s spindown       27s + 1s spindown               15s + 0s spindown
Login Screen to Desktop           10s + 30s spindown       18s + 1s spindown               1s + 0s spindown
Desktop to Power Off (Shutdown)   20s-35s                  5s                              2s

To be fair, the Windows machine had more software to load than would normally be found on a Linux machine. Although both machines have the same commercial video drivers, the Windows OS has virus protection software and a few other things to load before becoming usable. But this is still a design issue. Linux simply doesn't thrash the disk by design.

Some extra technical details:

Both machines are 64-bit dual-core Intel architectures with 2 GB of RAM. All operating systems are 32-bit software. All machines have similar usage wear and tear, and a moderate amount of software installed on them. The big difference in the hardware is that the CR-48 is somewhere between a netbook and a notebook, while the desktop is a typical desktop with a 0.5 TB mechanical hard disk, a mid-range video card and a 2.6 GHz processor.

Thursday, February 10, 2011

Upgrading Ruby on FreeBSD

The Ruby language has fairly frequent bugfixes and minor release versions. Keeping up with this can be challenging, but with RVM (Ruby Version Manager) it's pretty much a snap.

Today I upgraded my Ruby installation on FreeBSD from 1.9.2-p0 to 1.9.2-p136, with a pretty substantial number of fixes added from one version to the other.

On FreeBSD, you should avoid using the Ruby ports altogether, and let the wonderful little RVM shell script manage your installation. It will save hours of messing around with the Ruby and Gems ports, which don't play well with Rails :-(

RVM has a real talent for managing multiple installed Ruby/Gems installations - all in a hidden .rvm directory in your home directory. It requires no root privileges to use. It also updates itself with ease and appears to have a very dedicated author. If you don't already have RVM installed, you can get it from http://rvm.beginrescueend.com/rvm/install/

One might also be doing oneself a huge favor by following Michael Hartl's simple instructions for bootstrapping Ruby. He too is fastidious about maintaining his instructions, which are kept fresh and current.


In any case, upgrading Ruby is a fairly painless process:

1. The first step is to update RVM itself with: $> rvm get head

2. Next, reload the RVM shell script: $> rvm reload

3. Then, tell it to install the new Ruby version: $> rvm install 1.9.2-p136
(this will download, compile and install from source, so wait a while)

4. Running $> rvm list will give you a list of all the Ruby installations RVM is managing, with an arrow pointing to the default.
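
On my system that list looks roughly like this, with the arrow marking the default (the exact formatting varies between RVM versions):

rvm rubies

=> ruby-1.9.2-p0 [ x86_64 ]
   ruby-1.9.2-p136 [ x86_64 ]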

5. Now just tell RVM which Ruby to use as the default: $> rvm --default use ruby-1.9.2-p136, then check again with rvm list to verify the new default Ruby installation. You can also use $> ruby --version to double-check.

Wednesday, February 9, 2011

Just started this blog. Hopefully I'll have enough time to fill it out!