Saturday, December 3, 2011

Recovering ChromeOS with a small flash drive on FreeBSD

If you've got a Chromebook and have ever made a typing error in DEV mode while switching over to a Linux installation, you probably know what it's like to put your machine into an unusable state. You will need to resort to a recovery image, and the instructions for that are right here.

Google provides recovery utilities on various platforms for building a bootable image for a broken Chromebook; they run on Linux, Windows and Apple machines. However, if your Chromebook crashed and you don't have a 4GB flash drive for the Windows utility, and only have a FreeBSD machine, there is still a way to recover.

Truth be told, the actual stable channel recovery image is only 97+ MB of data, so you really don't need a 4GB flash stick to recover your machine. Unfortunately, under Windows, the recovery software requires that you have a 4GB stick or SD card. But all you really need is a 1GB stick or SD card.

If you're using FreeBSD, the Linux version of the software will let you download an image, but it wants to use the process file system, which has been phased out on FreeBSD. You can still install a Linux-compatible process file system for the sake of the script, but it will not recognize anything plugged into the USB ports for some reason.

The best thing under these circumstances is to make a bootable device yourself, and this means simply downloading the latest stable channel image from here. If you are on FreeBSD and use wget, it will complain about an unverifiable SSL certificate, so you must tell it to ignore that:

# wget --no-check-certificate https://dl.google.com/dl/chromeos/recovery/cr48-stable-recovery

Unzip the image, and plug in your flash stick:

# unzip cr48-stable-recovery

It shouldn't matter what's on the stick as long as you've backed it up, because the next command will wipe it out entirely and drop the image raw on whatever used to be there:
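Before running it, though, it's worth double-checking which device node the stick actually came up as, so you don't wipe the wrong disk. On FreeBSD, either of these will tell you (assuming the stick shows up as a da device, which USB sticks normally do):

# dmesg | tail

# camcontrol devlist

Look for the attach messages (or the device list entry) for your stick; if it isn't da0, adjust the of= argument in the dd command below accordingly.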

# dd if=chromeos_0.15.1011.118_x86-mario_recovery_stable-channel_mp-v2.bin of=/dev/da0 bs=4194304 conv=sync

# sync


The command above was stolen from Google's Linux recovery script, which fails to recognize a flash drive under FreeBSD. The huge block size greatly reduces the transfer time, and sync seals the deal when it's done. When you plug the stick into the CR-48, it will go through some tests, print some graphics to the screen, and may tell you the recovery failed. However, if you pull the stick out of the USB port, it will reboot into Chrome just fine :)

A Neat Trick

The Linuxcommando Blog showed a neat inter-process command-line trick that really demonstrates the versatility of the UNIX terminal. It involves prodding a process running in one terminal with a signal sent from a shell prompt in another, making the process in the first terminal report something about what it's doing.

The specific scenario has to do with dd, the old UNIX tape and disk copying program. It gives essentially no status while it's running, which for large copies can be annoying, especially if things are going rather slowly. Once you fire off dd on a long copy, it doesn't say anything until it's done. Below is a command to copy a 900+ MB recovery image to a flash drive:

# dd if=chromeos_0.15.1011.118_x86-mario_recovery_stable-channel_mp-v2.bin of=/dev/da0

Due to the slow transfer speed of flash drives, and the large size of the file, this can take 20 minutes, and dd will (dutifully) not report anything during that period. However, on FreeBSD, if you open another terminal and find the process ID of the dd process, you can prod it into reporting its progress:

# sudo kill -SIGINFO 18940

Every time you run this, the process running in the first terminal will spit out something like this on stderr, but it won't stop running:

71055360 bytes transferred in 262.988274 secs (270185 bytes/sec)
346343+0 records in
346342+0 records out
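Incidentally, if you'd rather not look up the PID by hand, pgrep and pkill (both in the FreeBSD base system) can do the lookup for you. A minimal sketch, assuming only one dd is running:

# pkill -INFO dd

or, to target only the most recently started dd:

# kill -INFO $(pgrep -n dd)

Even simpler, FreeBSD's terminal driver binds SIGINFO to the status character, so pressing Ctrl+T in the terminal where dd is running has the same effect.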


Friday, May 20, 2011

The Merit of Software Experiments

To contrast with the ideas presented in my last post [On Doing Things the Right Way], I felt it necessary to say something about the other type of software development project - the "experiment". About half of all the projects I have worked on were experimental in nature.

I personally have worked on several experiments: emulators, data spike detectors, performance and load testing utilities, object-oriented version control, parallelized software build systems, etc. None of these are in current use, but some have been re-written and deployed as "real" software projects by others.

Experimental software projects are kind of an enigma. They are less planned and more ad-hoc. They are essentially a license for the programmer to operate with impunity in designing and steering the behavior of the program. An experiment rarely gets more than a few requirements from stakeholders; the lead developer determines most aspects of the design.

Conceptually, experiments have business value, but the need they fill is often only secondary and non-critical in nature. The experiment is almost always disposable, and lacks commitment from developers and management to succeed beyond its first incarnation.

The failure rate of experiments is extremely high. On the first iteration, with a single developer, they are almost always guaranteed to fail as deployable entities. Depending on their degree of criticality to the business, they may or may not be re-written and permanently deployed, but rarely by the same people who wrote them initially.

In that way, experiments are not real software projects, but more loosely-defined forays into the unknown. Their hidden purpose is to discover facts and measure the desirability of a service or program. Unlike prototypes, they are not marketing props but tools of investigation. There is always something to be learned from their construction about business process, technical needs, and obstacles that might be encountered in the dark.

An unsuccessful experiment is one where the business did not gain much for the time and effort placed into it. Little or no advantage or knowledge was gained compared to the cost.

It can be hard to distinguish between an unsuccessful experiment and an unsuccessful 'real' software project. The main distinction is that experiments are acknowledged as experiments from the beginning, even if they are deployed for a time. Real software projects have enough organizational support and commitment to drive things to production. Experiments generally lack that organizational backing.

A successful experiment is one in which the time and effort spent in development gave insight into business needs and helped to define requirements or new architectural approaches for future projects - a kind of initial marker-stone for navigating future development efforts.

Successful experiments can become successful software projects if the organization commits to re-developing and maintaining the software. What is reusable in a successful experiment is not the code, but the ideas behind the code, which are adopted by the company and built upon to gain a product advantage.

On doing things the Right Way

An old school friend of mine recently approached me asking for some programming help on an institutional project he was working on. After a few e-mails, I had to come to terms with the fact that I could not help him with the thing he wanted the most - someone to program a web interface front-end to a database he created.

Well, specifically, it's not that I couldn't help, just that I have a thing about promising stuff I am not sure I can deliver. The old newspaper editing adage "when in doubt, leave it out" applies to the list of things you can or cannot do to satisfy requests from potential customers. In other words, if someone asks you to do something, and you are not fully confident you can do it, don't agree to do it.

This is different if you are writing code for your own project. You would obviously attempt to use unfamiliar technologies, take a lot of risks, do a lot of hacking because you could forgive yourself for failure. Promises to yourself can be broken without consequence.

But when a customer presses "can you do this for me?", and "Not sure, but I can try" (aka "maybe") is not a firm enough ("yes" or "no" type) answer for them, then you must say "no." The reason is the potential for failure, and the possibility of having to deal with the accusation that you promised something you could not deliver.

That being said, another issue crops up, and it's about who manages the process of development. If the customer is in the driver's seat about what he wants, that's fine, as long as he understands and accepts some basic prerequisites of development. The stakeholder can sometimes be the primary domain expert, but not a development expert. If this is the case, he must be educated on the basics before anything is agreed-to.

The basics? Software is a planned, deliberate thing. You intend to produce something. You develop plans. You write code, test against the plans and deploy. There is no ambiguity. The intent is to build a deployable, functioning item that people can use.

Developing software among two or more consenting adults is an act of intent; one has no choice but to consider the state of the deployed article at the time of conception. If you want people to like what you do, you have to plan for it.

This means, before any code is written:

a.) The developer must understand the conceptual domain of the customer.

b.) The developer must build a Requirements Document with the customer.

c.) The developer must build a detailed Functional Specification describing the behavior of all parts of the product.

Even if no code ever gets written, and the project is called off at this point, at least you did not waste money building something you never wanted or were never able to build in the first place.

Even if the current developer is not the person who will ultimately write the code, at least you have working plans to hand to the next developer, which is a huge advantage.

Even if your plans change, you only have to change documents, and will not have to change code too.

If the customer can agree to these things, then the project can move forward. If not, or if the customer waffles, the project cannot move forward.

For example, the first thing a customer might do is trivialize the work needed to complete the project: "All I want is something that just does this or that". If the project is sufficiently trivialized, there would be no need to waste time on engineering prerequisites such as planning, management and documentation, or their expense. Code could just be "whipped-up" without a care.

Oftentimes, this comes in the form of a half-baked request for a complete, but easy-to-make product - a kind of mythical animal that is half-lion, half-mouse. The request can usually be described in terms of a contradiction: that of a "working prototype", which doesn't exist in the software business.

Prototypes are to programs as mannequins are to robots. Prototypes are not real programs. They may resemble programs, but are only designed to emulate or "fake" some kind of behavior of the real thing. Prototypes are for display purposes only. Prototypes never get retro-fitted with code and developed into functioning, deployed software.

In the software world, the difference between developing a prototype and developing a working product is one of intent. You either intend to develop a prototype or you intend to develop a software product.

Often, people who do not know what they are doing will attempt to save money by cajoling developers into building a "working prototype", promising it will only be used for the purpose of proof-of-concept. Then, at a later date, they decide to renege on that promise and treat the prototype as a working product, telling developers to "tweak it" for production.

This leads to disaster. Economic loss, credibility loss and a lot of wasted time. This is the Working Prototype Trap. It looks like a good idea, and with a few misconceived plans, it becomes a real nightmare.

The problem is, some customers don't know that we just don't want to go there. They need to be educated, and have their expectations controlled.

This is not to say that there isn't someone out there who has the talent, experience and enterprise to "whip-up" a quick website that satisfies all the requirements. In fact, if the project is small enough and the expertise of the implementer is great enough, and the knowledge of the domain space is easy enough to acquire, it can be done. It's just a question of locating the right person.

But often this person cannot be easily recruited, and he or she must be able to take the project to the next level since no documentation exists to support other less optimal participants.

Sunday, March 6, 2011

When life hands you lemons...

If you're unemployed, and want to know how many days it's been, how many years' experience you have, and with which companies, this Ruby class will tell you. Basically, initialize the WorkHistoryAnalysis class and pass in each experience as "<name>, [start date], [end date]" in YYYY,MM,DD format, as shown at the bottom of the file. A little report will be generated at the console.



How to embed code snippets in blogger

Blogger.com (this blogging service) is terrible for embedding code.  Everything gets mangled, double-spaced and mis-formatted. Just terrible. But there is an answer to code snippet embedding: Gist.

Gist is a free service run by Github, the open-source project hosting service. Gist allows you to post code snippets, share them via a unique URL, and even allows for comments. Just go to The Gist Website, paste in your code, give it a title and description, then click "embed" and copy the link. You can paste the link into your blog, and (using JavaScript) it will embed an un-mangled code snippet, just like this:


Monday, February 28, 2011

A Short RVM setup sheet



RVM installation for FreeBSD


There are a few good steps outlined on the RVM website. This document is mostly echoed from those steps: RVM install

The author (Wayne Seguin) offers probably the best tech support available. He is on irc.freenode.net, but you'll have to authenticate in order to post. The best route is to register on freenode, set up a username, and log in to the #rvm channel with authentication. The web interface is pretty nice.


/msg nickserv identify <your password>


Basic Installation


Most of these steps should NOT be run as root, though some of the dependencies will need to be. Install git, bash and curl. When these are installed, run bash and update your .bashrc with this line:


[[ -s "$HOME/.rvm/scripts/rvm" ]] && . "$HOME/.rvm/scripts/rvm"


Then:
   chmod 600 .bashrc


and edit .profile, adding this line as the last line in the file:
   source ~/.bashrc


Log out of the console, and log back in to make sure everything works.
Now fetch rvm: 


$ bash < <(curl -s https://rvm.beginrescueend.com/install/rvm)


Log out and log back in again. To test the rvm installation, it should say "rvm is a function" when the following is typed:


$ type rvm | head -1
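If the hook in .bashrc is working, the first line of output should read something like:

rvm is a function

If it says anything else, the .bashrc line above didn't get sourced.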




Updating the install


You should probably run these two commands each time you log in to your shell, but do it now to make sure you have the newest version:


$ rvm get head
$ rvm reload


Getting Dependencies and Ruby


You'll have to check the notes for your platform: 
$ rvm notes


These notes are very important. You'll have to install a bunch of dependencies as root. For Ubuntu, for example, it will give a list of packages to install:


ruby: /usr/bin/apt-get install build-essential bison openssl libreadline6 libreadline6-dev curl git-core zlib1g zlib1g-dev libssl-dev libyaml-dev libsqlite3-0 libsqlite3-dev sqlite3 libxml2-dev libxslt-dev autoconf libc6-dev ncurses-dev


To find out which Rubies you can install, type:
    $ rvm list known


Now for the ruby installation (NOT done as root). Just choose the version, and run the following:  


$ rvm install 1.9.2

where 1.9.2 is the Ruby version you chose.




This will take some time to build, since it's being built from source. Also, there will be no screen messages during the build process. When completed, both ruby and rubygems will be installed somewhere under your .rvm directory.
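If you'd like some reassurance that the build is actually progressing, RVM writes its build output to log files under ~/.rvm/log/ (that path is an assumption based on the default RVM layout). Tailing the newest file there from another terminal gives a rough progress view:

$ ls -lt ~/.rvm/log/
$ tail -f ~/.rvm/log/ruby-1.9.2-p180/make.log

The exact subdirectory and file names depend on the ruby being built, so go by whatever ls shows you; the make.log path above is just a hypothetical example.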


Setting default Ruby & Gems


If this is the first installation, you won't have any ruby or gems defined by default. So, to set your default ruby to ruby-1.9.2-p180, enter:


$ rvm --default use ruby-1.9.2-p180
$ rvm list


The above lines will display which version is the current default. You can further verify with:


$ ruby --version
$ gem --version


Next, set your default gemset to something bound to its purpose (say, rails3):


$ rvm --create use 1.9.2@rails3

This tells rvm to use ruby 1.9.2 paired with the gemset called rails3. What happens is that rvm creates a separate set of gems to use with ruby-1.9.2. If you should ever need to use another ruby and another version of gems, you can have multiple combinations hanging around, and simply tell rvm which one you would like to use (a short example of switching between them appears at the end of this section). To verify the current gemset, type:

$ rvm gemset list

You should see an entry for "global" and your special gemset: rails3. At this point, you should update your gems to the latest. This is done by:


$ gem --version
$ gem update --system
$ gem --version


You should see the version jump to the latest (currently 1.8.5.) There is some debate on whether any of these commands should be run as root, but I would not recommend it if everything appears to work. There is no sense giving root access to programs that do not need it, especially on web servers.
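As promised above, here is a short sketch of switching between ruby/gemset combinations later on. It's just another use statement; the names follow whatever rvm list and rvm gemset list report:

$ rvm use 1.9.2@rails3
$ rvm use ruby-1.9.2-p180@global

The first line selects ruby 1.9.2 with the rails3 gemset; the second switches the same ruby back to its global gemset.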


Installing rails (optional)


That should be it for the ruby installation. You can run ruby normally as the current user without worries. RVM basically intercepts calls to ruby and its utilities, and allows you to have everything installed in your local directory, as well as multiple versions of Ruby and Gems, which are kept as "Gemsets". 


Before installing Ruby on Rails, it's worth visiting the Rails website to determine the correct version. At http://rubyonrails.org there are three available: the release candidate (unstable), the dot-zero version (too old) and the bugfix version. In this case, the bugfix version is 3.0.7, so we will install that:


$ gem install rails --version 3.0.7


All dependencies should be installed locally (in the .rvm subdirectory) without the need for root access. You should be able to see what's installed with:


$ gem list


If any troubles occur, or there is a need for build-time modifications, you can always poke around in the sources in the .rvm directory to customize anything you install.


Maintenance


$ rvm get head
$ rvm reload

Thursday, February 24, 2011

The misery of Nth Party Packaging Systems

<mood - very annoyed>

I have a beef. It's about the compatibility between various Nth tier packaging systems on any given platform.
When you build a software package, it should just work. A software package to be installed on FreeBSD should not reveal the developer's personal environment settings (like trying to look for files on paths from the dev's Apple laptop, and spitting out error messages about it).

This is the strange case of the ruby gem called ZMQ, which is the Ruby interface to the ZeroMQ libraries. It essentially makes ZMQ usable under Ruby.

This is 5th-party software. Third-party software is managed under FreeBSD by the ports collection, and the ZMQ libraries install just fine from there. But you're better off using RVM (Ruby Version Manager) to manage your various installations of Ruby on your system. RVM manages 4th-party software, because it sits one level below a third-party packaging system. The GEM packaging system (Ruby's module installation system) is 5th-party software, because it runs under RVM.

FreeBSD -> Ports -> RVM -> GEM
           (3rd)    (4th)  (5th)

I don't know what second-party software is.

Basically, we have too many packaging systems here on a single platform. They all work fine if you don't try to get them to work together. But everything falls apart when you expect them to interoperate without problems. The GEM installer under RVM under FreeBSD simply cannot figure out where to find the libraries it needs to build GEMs from scratch. It is completely unaware of the location where FreeBSD normally installs libraries (/usr/local), and so you have to dig through a zillion layers of logfiles and directories to figure out which Ruby module failed, and why, and how to make it work.

This takes away from quality programming time. Seriously. Hours.
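In the meantime, the usual workaround is to tell the gem build where FreeBSD's ports actually put things. Gems whose extconf.rb uses the standard mkmf helpers generally honor --with-<name>-dir style flags passed after a double dash; whether the zmq gem in particular accepts --with-zmq-dir is an assumption on my part, but it is the conventional spelling:

$ gem install zmq -- --with-zmq-dir=/usr/local

This points the native build at /usr/local, which is where the ports collection installs the ZeroMQ headers and libraries.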

What would fix this is some kind of SaaS RESTful server at FreeBSD that the GEM installer could query, based on the output of uname -a, the gem that's being installed, and what package it is looking for. The RESTful server could respond with package meta-data giving installation locations of dependent software for this version of FreeBSD. This would be a cool idea, because people who write GEMs probably don't know the standard installation and library paths for all the possible operating systems out there. Nor do they know which subsystem their package is being installed under.

Such meta-data servers could provide information for literally any OS, including Windows and Linux. Each of these operating systems has unique locations for software installations.

The goal is to rely on standard installation locations for any given platform so you can give people the "IJW" experience (It Just Works - a term coined at Microsoft) when installing stuff.

Wednesday, February 23, 2011

The value of taking breaks

One of the things I keep re-discovering when working on little software projects, is the value of thinking about stuff away from the computer.

Today, for example, I spent two hours mucking around @thecoffeeshop, debugging a Javascript file on the CR-48. It was mostly problem discovery mixed in with trying to come up with an answer in code. It was confounding, because each of my attempted fixes would not work. I was also fixing little things along the way, and many of my ancillary thoughts were related to the act of coding and using the computer.

It started to snow, and got very cold, so I gave up and walked back home.

During that time, about 20 minutes, I thought lightly about the problem (solution analysis) along with stuff not related to the programming issue, such as my schedule for the rest of the day, the weather, etc. When I got back home, I set up my dev environment and basically fixed the problem in 5 minutes.

What does all this say?  I think a few things can be learned from my little 'case study':

Fixing little problems is mostly what development is about. This is particularly true of informal, incremental development - but it could just as easily apply to planned development. There are always going to be technical obstacles in implementation, and your challenge is to overcome each one and close in on a finished product.

For each of these little obstacles there are little phases you take to solve them.

1. Research - find out what you don't know. This can take a little while sometimes. For this particular problem, I had to do some research on bit-mapping and codepages the night before. Little bits of test code were written as proof-of-concept. The takeaway was not really a solution, but a realization that I needed to do more work to satisfy certain dependencies in order to find a solution. Frustrating and messy.

2. Problem Discovery - Actually examining and testing existing code to discover what it wasn't doing. Frankly, this was also messy. When I got tired of trying to analyze what was going on, I ended up doing code housekeeping as a 'break', only to change things enough that I was no longer certain the environment was the same. I ended this phase feeling annoyed with myself.

3. Solution Analysis - Interestingly enough, away from the computer. I couldn't just impulsively follow hunches, do research, test code snippets or clean up existing work. As a diversion, my thoughts were interleaved with other totally unrelated matters for about 20 minutes. The takeaway from this was a feeling of being less frustrated and tangled. I also had a vague suspicion about what I had been doing wrong. Thoughts were bubbling up naturally rather than being forced through the kind of slavery I often put my mind through when trying to solve something.

4. Trial Implementation - About 5 minutes to a solution (or at least substantial progress toward one.)  I did do minor layout housekeeping first to clarify the toilet-paper roll of code. I was also using better equipment and ate lunch while I was working. I rolled back to a previous version I knew was better quality, and resumed work from there.

I don't regard myself as a very good programmer. I have struggled with this fact for a while. I take heart in the criticism of co-workers who have said in the past that I tend to make things too complicated. They are correct. I also take heart in the fact that almost every project I worked on was not entirely my work. Contributions of other people are ever-present. It helps to remember these humbling facts, because being open to a little self-criticism probably makes you more effective at what you do.

Thursday, February 17, 2011

Basic Javascript Call Types

Some of the hardest-to-understand things about Javascript are also some of the most basic concepts in any programming language. Take simple function declarations. Just input-output, parameters and an algorithm, right? Well, not quite. It's actually a bit more complicated.

Javascript is a dual-style language. You can program it procedurally, like C, or in an object-oriented manner, somewhat like Ruby or Perl. The basic function/class declaration takes the form:

function Name (parameter_list) { code_block }

The form above is really just syntactic sugar for function pointer assignment syntax:

var Name = function (parameter_list) { code_block }

Name is an alias for an anonymous function pointer. Furthermore, every object instance in Javascript is more or less equivalent to assignment of a simple anonymous hash table:

var Name = { code_block }

Above, we create an object called Name. There is no prototype (ancestral shadow object) attached to Name, as you would normally get if you used new() (which you can't with a hash), but member methods and properties assigned inline or posthumously will function just as if they were part of an object.

Similarly, a function declaration also doubles as a class declaration in Javascript:

// namespace collision wrapper START
(function () {

/* this is our dual class/function */

function Klass(param) 
   {
   this.inline_property = "I'm an inline_property.";
   this.inline_method   = function () 
      { 
      return "I'm an inline_method."; 
      }
   return "I'm a return value.";
   };

/*********************************/
/* add some members posthumously */
/*********************************/

/* Add static members to Klass */

Klass.static_property = "I'm a static_property."; 

Klass.static_method   = function ()
   { 
   return "I'm a static_method."; 
   } 

/* Add instance members to all objects created from Klass */

Klass.prototype.instance_property = "I'm an instance_property."; 

Klass.prototype.instance_method = function foo()
   { 
   return "I'm an instance_method."; 
   }

/**********************************************************/
/* Now let's see the different types of calls we can make */
/**********************************************************/

var d = document;

/* call as a plain old function */
d.write ("[Call as a function]<p>");

// prints "I'm a return value."
d.write ( Klass("parameter") + "<p>");

/* call as a static Class */
d.write ("[Call as a static class]<p>");

// prints "I'm a static_property."
d.write ( Klass.static_property + "<p>");

// prints "I'm a static_method."
d.write ( Klass.static_method() + "<p>");

/* call as an instance of Klass (an object) */
d.write ("[Call as an Object]<p>");

var K = new Klass();

// prints "I'm an inline_property"
d.write ( K.inline_property + "<p>");

// prints "I'm an inline_method"
d.write ( K.inline_method() + "<p>");

// prints "I'm an instance_property"
d.write ( K.instance_property + "<p>");

// prints "I'm an instance_method"
d.write ( K.instance_method() + "<p>");

}) (); // namespace collision wrapper END

The point of the code above is to show that a function declaration can double as a class declaration. If the function (or class) is new()-ed, the result has nearly the full capability of an object. So not only is an object mostly interchangeable with a hash, but a class is mostly interchangeable with a function.

Class <=> Function and Object <=> Hash


How's that for confusing? Little old Javascript is not as simple as it seems!

Monday, February 14, 2011

Some first impressions of Javascript

In the past couple of weeks I've been learning Javascript. I guess I put it off for so long because I thought it would be simple to pick up and learn. Unfortunately, I've discovered that's not the case. The complexity (for want of a more suitable term) is definitely an encumbrance to learning.

After two weeks of reading and writing, I view Javascript as a comparatively crude procedural language with an overly-liberal free-form structure that relies heavily on programmer technique and understanding of internals to code solidly. For the beginner, the language seems quite confusing, but it's possible to write stuff.

It seems there is no 'correct' approach to author good code. Should you write Javascript in a defensive, yet contrived, object-oriented manner or a direct, simple functional manner? Is the goal to write small code or to write protected code? There is plenty of documentation out there that will advocate both approaches. Unlike C and C++ there is no single, revered, authoritative source of information.

Javascript seems to be a language that has evolved to keep pace with changing standards in HTML and associated technologies. It has a big job to do. It integrates fully with and can substitute for many functions of CSS and HTML, of which there are many versions.

Javascript has internal libraries to support a bunch of things to do with browser content, control and interaction, and many external libraries such as JQuery, Prototype, Dojo, YUI and others that will do anything from compensating for deficiencies in Javascript to offering complex widgets, like popout calendars, forms and animations.

In fact, when all the hens come home to roost the amount of information to learn is mind-boggling, disorganized - and complicated. To further steepen the learning curve, programmers must consider issues related to performance, browser compatibility, event handling, security, scoping, user and programmer friendliness.

There are not many languages that compare to Javascript. Simply the fact that the interpreter is a specialized language hosted by a web browser host makes it unique. But Javascript carries some noticeable similarities to other scripting languages like Perl and Ruby.

Like Perl, it is not really object oriented, but can be carefully used in such a way as to mimic object-oriented features to provide some form of inheritance, member access protection, and possibly classes and class members. I don't think polymorphism enters the picture since there are not really any meaningful types. Like Ruby, its object-oriented model is soft, mostly typeless, and you can mutate "objects" with new members easily after instantiating them. The idea of hidden prototypes is similar to Perl's symbol table and Ruby's "shadow classes". Execution context and reliance on closures are other things which bear similarity to Ruby and Perl - but closure-oriented programming is hard to understand and diagnose.

The whole "a function is an object is a hash" thing perhaps has gone too far when you discover Javascript arrays are quite inefficient, being hash tables (or at least associative arrays) themselves. And there are no integers. Or something called the scope chain (still learning about that) can easily kill off performance if you aren't aware of its behind the scenes behaviors.

But in the end, the language itself is double-edged, with features that are simultaneously beneficial and detrimental. I suspect it is a language of trade-offs.

I should mention the supporting toolsets, especially debuggers, are fairly low-quality when compared to C and C++ counterparts, although the profilers are pretty good. It depends on the browser. Probably the best debugger I've found is the one included with Chrome - but it goes off into the weeds, has a few questionable behaviors, and isn't comprehensive.

It's a little sad to think that a language so muddled is the pillar of the future -- the future being cloud applications. Yet, this is kind of the regressed world we live in, muddled and uncertain.

Saturday, February 12, 2011

Cloud-based IDEs in the Perfect World

One of the complaints I see on mailing lists for the Google CR-48 laptop is the lack of development tools, particularly the absence of Java. As a developer it can be a bit frustrating to have your favorite portable computing machine bolted into a scantily-equipped programming environment.

But the CR-48 is designed to embrace the concept of cloud computing. This means in a Perfect Universe, all tools you would normally expect to find on your typical desktop will have been moved to the internet as web-based software. In this Universe, you no longer have to install, maintain, synchronize or otherwise deal with software installed on your laptop, desktop or work computer. This will all be taken care of at various data centers in the cloud (AKA the internet).

So if you want to open Notepad, you will just point your web browser to the notepad application on the web. If you want to compile some C++ with the GNU compiler suite, you'll simply point your browser to an AJAX application which will give you a command prompt. If you want to debug a C# application, you will just go to Microsoft's website where they will have your personal copy of Visual Studio online, complete with all your projects. Everything in the Perfect Universe will happen through your browser, so there is no need to store anything on your computer. These earthly things will be left behind.

But in the Not-So-Perfect Universe, none of these web applications actually exist yet. And we should be mindful of the fact that these types of applications are incredibly complex and expensive to build - much more so than the original desktop applications they will be designed to replace. The technologies needed to elevate us to the Perfect Universe are fragmented, distributed and kludgey.

This is not to say it won't all one day come true, but it ain't gonna happen in 2011.

So for a code geek using a cloud-based appliance like the CR-48, the frustration is understandable. The CR-48 in user mode grants no access to the underlying file and operating system. So you can't install all those juicy developer tools and languages.

(You can go into developer mode on the CR-48 and install Linux, but it's not playing fair by the cloud, so we will ignore that option in this discussion)

If you were thinking 'Java applets, maybe I can do those', I am sorry but Chrome OS doesn't speak Java. Not even in the browser. However, it does fluently speak Javascript if that's any concession. Not quite the Cockney of computer languages (like Perl), but more like pidgin English. The compensating factor is the Chrome browser's Javascript engine - V8 - which is essentially a tornado in a can. It speaks Javascript very fast. If you live in the cloud you'd be crazy not to use it, whatever harp or computer you choose to play.

Given this future, it's amusing to think the Javascript language is going to be critical to the success of living in the Perfect Universe. The language is so limited it begs the black arts of the high coding priests whose bag of tricks will make it good again. Apparently, the services of this elite priesthood will matter, because research has shown that even 500ms web application-induced latency will drive away 20% of your online customers.

So in this Ominous Perfect World, if you are in a pinch and must use your web browser for all things, here is a short list of browser-based development tools that look promising:

Jgate - free IDE + hosting for Javascript & Java apps.

ShiftEdit - an IDE for several languages.

Good luck.

Friday, February 11, 2011

Win7 vs. Ubuntu vs. CR-48 boot times

I just completed a little informal testing of boot times between Win7 and Ubuntu 10.10. Just for fun I included times for the same operations on the Google CR-48 netbook laptop running Ubuntu.

Amazingly, between Ubuntus, the desktop machine with somewhere near twice the processing power was at least 50% slower on booting than the Google CR-48 laptop. And Win7 was approximately 50% slower to achieve a state of usability than Ubuntu on the same desktop machine.

Why is the CR-48 so fast? Well, the BIOS boots about 4x faster. Plus Ubuntu on the CR-48 has no swap file to contend with, and therefore no disk thrashing (even if you could thrash a drive with no moving parts.)

However, comparing Ubuntu and Win7 desktop boot performance was a bit more subtle. Superficially, each OS had fairly similar stage boot times, with the main exception being disk spin-down after a stage was reached. Ubuntu had much less disk spin-down than Win7, and thus became usable much sooner. I think this is mainly due to the way Win7 loads its drivers and prepares its files.

Disk spin-down was measured by when the drive light stopped being solid. From a practical perspective the OS is basically unusable until disk activity allows for some breathing space. Performance can even be made worse if applications are launched before the disk stops intensely thrashing.

Informal boot time comparisons:

Stage                            Win7 on Intel Desktop     Ubuntu 10.10 on Intel Desktop     Ubuntu 10.10 on Google CR-48 laptop
Power on to Boot Select          20s                       20s                               5s
Boot Select to Login Screen      27s + 23s spindown        27s + 1s spindown                 15s + 0s spindown
Login Screen to Desktop          10s + 30s spindown        18s + 1s spindown                 1s + 0s spindown
Desktop to Power Off Shutdown    20s-35s                   5s                                2s

To be fair, the Windows machine had more software to load that would not be normally found on a Linux machine. Although both machines have the same commercial video software drivers, the Windows OS has virus protection software and a few other things to load before becoming usable. But this is still a design issue. Linux simply doesn't thrash the disk by design.

Some extra technical details:

Both machines are 64-bit dual-core Intel architectures with 2 GB RAM. All operating systems are 32-bit software. All machines have similar usage wear and tear, and a moderate amount of software installed on them. The big difference in the hardware is that the CR-48 is somewhere between a netbook and a notebook, while the desktop is a typical desktop with a 0.5 TB mechanical hard disk, a mid-range video card and a 2.6 GHz processor.

Thursday, February 10, 2011

Upgrading Ruby on FreeBSD

The Ruby language has fairly frequent bugfixes and minor release versions. Keeping up with this can be challenging, but with RVM (Ruby Version Manager) it's pretty much a snap.

Today I upgraded my Ruby installation on FreeBSD from 1.9.2-p0 to 1.9.2-p136, with a pretty substantial number of fixes added from one version to the other.

On FreeBSD, you should avoid using the Ruby ports altogether, and let the wonderful little RVM shell script manage your installation. It will save hours of messing around with the Ruby and Gems ports, which don't play well with Rails :-(

RVM has a real talent for managing multiple installed Ruby/Gems installations - all in a hidden .rvm directory in your home directory. It requires no root privileges to use. It also updates itself with ease and appears to have a very dedicated author. If you don't already have RVM installed, you can get it from http://rvm.beginrescueend.com/rvm/install/

One might also be doing oneself a huge favor by following Michael Hartl's simple instructions for bootstrapping Ruby. He too is fastidious about maintaining his instructions, which are kept fresh and current:


In any case, upgrading Ruby is a fairly painless process:

1. The first step is to upgrade RVM itself with: $> rvm upgrade

2. Next, reload the RVM shell script: $> rvm reload

3. Then, tell it to install the new Ruby version: $> rvm install 1.9.2-p136
(this will download, compile and install from source, so wait a while)

4. Running $> rvm list will give you a list of all the Ruby installations RVM is managing, with an arrow pointing to the default.

5. Now just tell RVM which Ruby to use as the default: $> rvm --default use ruby-1.9.2-p136, and check again with rvm list to verify the new default Ruby installation. You can also use $> ruby --version to double-check.
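For reference, the whole upgrade boils down to a handful of commands (the same steps as above, consolidated; note that gems are kept per-ruby, so you will need to reinstall or copy your gemsets for the new Ruby):

$> rvm upgrade
$> rvm reload
$> rvm install 1.9.2-p136
$> rvm --default use ruby-1.9.2-p136
$> rvm list
$> ruby --version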

Wednesday, February 9, 2011

Just started this blog. Hopefully I'll have enough time to fill it out!