Saturday, March 23, 2013

Shelling it again


Sadly again, I find myself using the punishing /bin/sh to script tasks in FreeBSD. And again, I am reminded how painful and time-consuming it can be to do some of the simplest things.

One of the first things you should realize is that sh is not Bash. It is Bash's featureless ancestor - a kind of pygmy caveman. It is smaller, faster, and somewhat harder to work with, and its error messages are nearly meaningless.

One thing I am absorbing about the language this shell speaks is that variables take several forms.

1. Left-hand side name. Ex: dinosaur="Dino"
In this form, a variable is receiving a value. 
2. Right-hand side name. Ex: animal=$dinosaur
In this form, $dinosaur is a variable being used for something - here, assignment to another variable.
3. Formal name. Ex: ${dinosaur}
The same as $dinosaur, but in this form you can butt the name right up against other text, like _${dinosaur}!!_, which expands to _Dino!!_. 
4. Verbing-name. Ex: animal=$(dinosaur)
Although it looks like a variable, this is not one; it is a command to do something in a shell. In this case, we are trying to call a program named "dinosaur" in the operating system. If dinosaur exists and replies, $animal will hold the response.
5. Deep verbing-name. Ex: $($dinosaur)
Just like the verbing-name, but $dinosaur is expanded first, so here we are specifically calling the program "Dino" in the operating system.
6. Subshell verbing-name. Ex: `dinosaur`
Here, a subshell runs the program "dinosaur" and hands back its output - the older backtick syntax for the same kind of substitution. All six forms are pulled together in the sketch below.
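
To make these forms concrete, here is a tiny sketch (the names are made up, and date stands in for any real program you might call):

dinosaur="Dino"              # 1. left-hand side name: assignment
animal=$dinosaur             # 2. right-hand side name: $dinosaur expands to "Dino"
echo "_${dinosaur}!!_"       # 3. formal name: prints _Dino!!_
animal=$(date)               # 4. verbing-name: runs the date program, captures its output
cmd="date"; animal=$($cmd)   # 5. deep verbing-name: $cmd is expanded first, then run
animal=`date`                # 6. backtick form: same result, older syntax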

And there are more forms besides. It's also worth noting that the shell has two parts:

1. The environment storage area
2. The shell storage area

The environment contains all the variables that have been exported to semi-protected storage, and these will persist throughout the shell's various operating modes and subshells. However, the shell's unprotected variable storage area can be assigned to without using the export command.

This is important in shell programming, because when you assign some value to a variable, names and values are stored in this unprotected environment. Unlike programming languages, the shell has no other way to store names and values. 
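
A quick sketch of the difference (hypothetical variables):

pet="Dino"                             # lands in the shell's unprotected storage only
owner="Fred"; export owner             # copied into the semi-protected environment
sh -c 'echo "pet=$pet owner=$owner"'   # a subshell sees only the export: pet= owner=Fred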

When you run a script from the prompt like this:

> scriptname.sh

It runs in a subshell, and all the unprotected storage it used is destroyed when it returns, but the semi-protected environment is transferred to the subshell. When you run a script like this:

> source scriptname.sh

it is not run in a subshell (in plain /bin/sh the sourcing command is actually ".", with "source" being the bash and csh spelling), and all the variables and values the script created are retained in the unprotected environment. This can have ramifications between subsequent runs if you don't reset your variables inside your script.
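
A sketch with a throwaway script (the file name is made up):

# settings.sh contains one line:  color="purple"

sh settings.sh       # runs in a subshell
echo "$color"        # prints nothing - the subshell's storage was destroyed

. ./settings.sh      # sourced into the current shell ("source" in bash)
echo "$color"        # prints "purple" - and it lingers into later runs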

It is also true that all variables are global by default in a script. So although these variables are often destroyed after the script has completed running in a subshell, you can get into trouble by assuming names are localized inside the body of the script itself.

Shell scripts do have callable functions, but they can't return values other than exit codes. If you want return values, you have to assign a global variable the role of holding a return value from all functions, and clear that variable at the beginning of every function. This is called defensive programming.

So the deal is, you can have functions effectively return two values: the status code (an integer, usually indicating failure or success) and a string containing some result data. 

Below is a function that takes a pathname as a string and cleans the duplicate slashes out of it, returning the result as a string. It also returns a status code: whether the cleaned path was found (0) or not found (1). If the function was handed an empty input string, it returns (2), and if some other error occurs, it returns that too.

The test driver for the function basically calls the function and parses the results. The inputs to the test driver are various unclean paths.
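
The original gist isn't reproduced here, but a minimal sketch of the idea looks something like this (function and variable names are made up, not the exact script):

#!/bin/sh
# clean_path: squeeze duplicate slashes out of $1 and leave the result in RESULT.
# Status: 0 = cleaned path exists, 1 = cleaned path not found, 2 = empty input.
RESULT=""

clean_path() {
    RESULT=""                                  # defensive: clear the "return" variable
    [ -z "$1" ] && return 2                    # empty input string
    RESULT=$(printf '%s\n' "$1" | tr -s '/')   # an external program does the real work
    if [ -e "$RESULT" ]; then return 0; else return 1; fi
}

# test driver: feed in some unclean paths and report what comes back
for p in "//usr//local///bin" "/no//such///path" ""; do
    clean_path "$p"
    status=$?                                  # capture it immediately, before it vanishes
    echo "input='$p' cleaned='$RESULT' status=$status"
done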

I think it shows functions are viable in sh scripts, which is an advantage when programming these things. But getting the syntax and behavior correct is actually quite hard to do.

Using Perl or Ruby, something like this can be done in minutes. Doing it in shell can take a lot longer, because it's a blind man's walk of trial and error - especially if you're used to more sophisticated languages where true is true and false is false, where you don't have to call external programs for regular expressions, and where built-in variables don't disappear if you fail to capture them right away.

Resources

Learning the Bash Shell (Newham & Rosenblatt, O'Reilly)
Easy Shell Scripting (Cherian, Linux Gazette)
returning value from called function in shell script (Stack Overflow)
Return a single value from a shell script function (Stack Overflow)

Thursday, March 21, 2013

Upgrading a -STABLE ZFS system with clang


One of the technologies I was interested in testing along with ZFS was the LLVM-based clang compiler suite on FreeBSD, which is currently under integration and slated officially to replace the gcc/g++ compiler suite in FreeBSD 10.  Right now, clang is in the 9-STABLE base system alongside gcc/g++.

Clang has a lot of virtues compared to gcc/g++ among which are:

  • Better, more informative error messages
  • Better C++ standards compliance
  • Better support for IDEs and diagnostic tools
  • Uses less memory
  • Has JIT support
  • Isn't monolithic, as is gcc/g++
  • BSD license, so commercially viable.

This is one of the things I love about FreeBSD. It's always kept as commercially viable as possible with the BSD license. There is no reason to use open source software at work or home that has restrictive licensing. Adoption of clang is just another good reason to use FreeBSD.

Instructions for building the OS with clang are located at: https://wiki.freebsd.org/BuildingFreeBSDWithClang

This page has some useful tips, and there are a couple worth mentioning here. If you want to build world and your kernel with clang, you have to enable the clang suite in /etc/make.conf so that it replaces gcc/g++:

CC=clang
CXX=clang++
CPP=clang-cpp

If you do a plain # make buildworld kernel with the default command-line options, it should work just fine. Nothing further to do.
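
In other words, the usual upgrade cycle, now compiled with clang (this is the short version - the Handbook covers the full mergemaster/single-user dance):

cd /usr/src
make buildworld
make kernel KERNCONF=GENERIC
make installworld
mergemaster
reboot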

However, if you're feeling cautious and want to test the kernel itself first, run the following from /usr/src:

make kernel KERNCONF=GENERIC INSTKERNNAME=clang

The line above builds a GENERIC kernel called clang, but places it into a separate directory along with its modules in /boot. This way, you leave your previous gcc kernel in place, yet can test the clang kernel easily with some intervention at the boot loader.

If you go this route, when you reboot, drop to the bootloader (option 2) and enter the following to change the module path to boot into the clang kernel:

set module_path=/boot/clang
boot clang

If the kernel panics, there is no need to do so yourself. Just reboot into the old gcc /boot/kernel without intervention, and it will load by default. If you moved or destroyed the old kernel, and that option isn't available, you can always opt to load /boot/kernel.old and its modules from the boot loader.

However, a clang boot will likely work, and the system should boot and enable you to make buildworld, kernel, installworld, etc. This worked fine for me, but on one machine there was a kernel panic while bootstrapping the system, due (probably) to module installation paths, which was fairly easy to address.

For some reason, the clang kernel was loading but clang modules were not. I decided to reinstall the kernel again, leaving off the KERNCONF=GENERIC build option:

make reinstallkernel

Which did the trick.

Once booted, a subsequent view of dmesg will show which compiler was used to build the kernel.
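
For example, something like this should pull out the relevant line:

dmesg | egrep -i 'clang|gcc'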

I will be experimenting with this ZFS clang build setup for a little while, and note some of the issues I come across in following posts, but so far, ZFS and clang kernel/OS are performing nicely on my little underpowered Toshiba NB 205 netbook!

Note: Rebuilding the kernel/OS worked fine on my server machine - an old Celeron 1G box.

Dotfile map for several shells

Here is a preliminary dotfile invocation map for several FreeBSD shells:

  • /bin/sh
  • /usr/local/bin/bash
  • /usr/local/bin/ksh93
  • /bin/csh
  • /bin/tcsh

This information was taken from manpages and not completely tested on my own systems, but you can see how important execution context (aka: interactive, login) is to dotfile script execution. Actual out-of-box behavior can vary, depending on what is being done in pre-installed dotfiles.

For example, for /bin/sh and ksh93, the environment variable ENV determines at execution time whether the shell runs an additional dotfile - either .shrc or .kshrc - for interactive use. And I still may not have the rules down correctly. It's very easy to be wrong when trying to describe this behavior in simple terms.
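
For instance, the conventional lines look something like this (paths are the usual defaults, not necessarily what your installed dotfiles use):

# in ~/.profile (read by sh at login):
ENV=$HOME/.shrc; export ENV      # sh will also read .shrc for interactive use

# the ksh93 convention is the same mechanism, different file:
ENV=$HOME/.kshrc; export ENV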

My general impression is that this needs to be harnessed and organized.

I am contemplating testing a system of indirect links to shell dotfiles, using the following scheme:

1.) Place all real dotfiles from all shells in a ~/shell-dotfiles directory, with their names prefixed by "DOT" as in DOT.kshrc and DOT.profile.

2.) Create a ~/.sane directory containing the following subdirectories:

  • interactive
  • login
  • neither
  • both
  • (non-interactive?, non-login?)

Under each of those subdirectories, create a subdirectory for each shell, containing links to the files read in that particular mode.

So, for example, dotfiles executed in a Bash interactive-login mode would be linked (either hard or soft) under ~/.sane/both/bash to their corresponding real dotfiles in ~/shell-dotfiles. So specifically, .profile (as one case) would have a link at ~/.sane/both/bash/DOT.profile pointing to ~/shell-dotfiles/DOT.profile.

3.) Lastly, links would be created from the dotfiles' expected locations in $HOME to the corresponding real files in ~/shell-dotfiles. So, for example, $HOME/.profile would link to ~/shell-dotfiles/DOT.profile.

So, there would always be one place (~/shell-dotfiles) to store the real files flatly and conspicuously, so there is never a name conflict. Secondly, there is a single place to store shell dotfiles specifically, used or unused - good for spot backups. Also, if hard links are made, there is some referential integrity to the whole thing.

The purpose of the ~/.sane directory would be to have a place to edit a file based on the shell and the particular login scenario, like interactive-only or login-only. It would be very clear when editing in the ~/.sane directory that you were editing for a particular context, and you could be better informed about the expected set of consequences.
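
As a sketch, wiring up one case of this by hand might look like the following (names per the plan above):

mkdir -p ~/shell-dotfiles ~/.sane/both/bash
mv ~/.profile ~/shell-dotfiles/DOT.profile                       # the one real copy
ln ~/shell-dotfiles/DOT.profile ~/.sane/both/bash/DOT.profile    # context view (hard link)
ln -s ~/shell-dotfiles/DOT.profile ~/.profile                    # where the shell expects it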

Wednesday, March 20, 2013

ZFS: A truly Superior Filesystem


My wife just recently got a new HP Chromebook, causing her to rapidly abandon her 3-year old Toshiba NB 205 netbook. This gave me a new computer to experiment on, and of course I installed FreeBSD.

This is a very low-powered machine, with a ton of case-edge built-in peripherals. It has 2G ram, and an internal disk upgraded to 230G. It originally came pre-packaged with Win7 "Starter" edition and a bunch of Toshiba bloatware, which makes it the perfect target for an OS nuke and wicked post-nuclear experimentation.

As my new testbed, this machine is running some of the newest, up-and-coming features of FreeBSD, among which is ZFS. I used these instructions for "Road Warrior" laptop:

http://forums.freebsd.org/showthread.php?t=31662

I wrote a little shell script to dump information from a variety of sources on FreeBSD (it assumes you have ZFS, Perl and beadm installed).
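
The gist isn't reproduced here, but a rough sketch of that kind of info-dump script:

#!/bin/sh
# rough info-dump: system, pool, dataset, and boot environment state
uname -a
zpool status
zpool list
zfs list
beadm list
sysctl -a | grep -i zfs | head -n 20    # a taste of the ZFS sysctls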


What is ZFS exactly? The skinny is that it's Sun's newest(ish) file system, and it seriously improves on anything else in existence right now. Far and away, it is the most sophisticated file system out there today. It's in OpenSolaris now, and FreeBSD developers have been quietly tooling away at it for a few years. I expect (and can only hope) it will ultimately replace UFS.

http://hub.opensolaris.org/bin/view/Community+Group+zfs/whatis

Some things ZFS has that other FS do not:

End-to-end Data integrity: According to Wikipedia's ZFS article (https://en.wikipedia.org/wiki/ZFS), about 1 in 90 hard drives has undetected failures that neither hardware nor software can normally catch. This phenomenon is called "silent corruption", and it is experienced at large data providers and at small ones with cheap hardware alike. ZFS can be employed in these cases to detect and repair silently corrupted data, because it uses all kinds of mechanisms to validate and store data that RAID doesn't.

Snapshots and Boot Environments: Similar in concept to Win7 restore-points, this feature gives you the ability to create a perfect bootable copy (a snapshot) of an existing OS. On the Toshiba, it takes only 3-5 seconds, and uses a nominal amount of disk space (try that on Win7). You can clone these, boot into them, destroy them, mount them or even export them to another system. The boot environments concept is from Solaris. If you install the beadm utility from the ports tree, it emulates Solaris' nice interface in FreeBSD, and offers even more elegance than using the two management utilities, zpool and zfs, directly.
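
For example (the dataset and BE names here are made up):

zfs snapshot zroot/ROOT/default@pre-upgrade   # an instant, nearly-free snapshot
beadm create testbed                          # a new boot environment cloned from the active one
beadm list                                    # list BEs, sizes, creation dates
beadm activate testbed                        # boot into it on the next reboot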

No fsck: ZFS uses a maintenance technique called "scrubbing" which is run periodically, as frequently as you would run an SSD optimizer or defragmenter. Scrubbing, unlike fsck, can be run on an online, mounted, active disk, and checks not only metadata, but the data itself for corruption. Auto-repair is done via RAID-Z (ZFS software raid, another feature) or by a kind of on-disk bit replication mechanism which looks into redundant copies for good replacement data. In ZFS, copy-on-write semantics are used, so data on the disk isn't corrupted the same way it can be on a normal file system.
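
For example:

zpool scrub zroot          # runs against the live, mounted pool
zpool status zroot         # shows progress and any checksum errors found and repaired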

No partitions or volumes: Hardware is organized into datasets and zpools - virtualized storage. No formatting, slices or fdisk. You can easily create filesystems within these pools, and within other filesystems. You can add disks in as mirrors from the command line, impose quotas on filesystems, reserve storage, share storage, compress, have transactions, and face none of the real limits on the number of directories, filesystems, and paths normally imposed by file systems.
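
A few examples (disk and pool names are made up):

zpool create tank mirror ada1 ada2     # two whole disks become a mirrored pool
zfs create tank/home                   # a new filesystem inside it, mounted immediately
zfs set quota=10G tank/home            # impose a quota
zfs set compression=on tank/home       # transparent compression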

And there are more features. It's almost mind-boggling. ZFS seems to do EVERYTHING right, and it's just too good to pass up.

http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/zfslast.pdf
https://wiki.freebsd.org/ZFS



Monday, March 18, 2013

On the quagmire of shell dotfile execution


Have you ever tried to get a third-party full-color 4-directional scrolling pager (aka: most) working under bash so you can see manpages in full color, and have programs like mergemaster call up a special viewing window rather than just scrolling stuff across the screen? You are in for a treat if you use FreeBSD.

Getting this to work is not as easy as it sounds. And it's all related to the dot-file quagmire underlying every *nix terminal session. The nature of this quagmire is the quad-mode operation of *nix shells and all the dotfiles they potentially execute, depending on which mode the shell is run in. And a bunch of other stuff.

Shells generally have two status modes of operation under which they are run: as a "login" shell or not, and as an "interactive" shell or not. A login shell means the shell goes through the login process before it runs. An interactive shell means the shell offers a prompt and waits for user input. Since these modes of operation are either on or off for any given shell session, this gives rise to four potential states of operation of a shell, depending on what kind of task you perform.


  1. A login-interactive session, like logging in and browsing through directories at a prompt (which is what most of us do most of the time).
  2. A session that is neither login nor interactive, like a scripted cron job that is run at 3 am the first Thursday of every month.
  3. A login-only session, like an ssh remote execution command where you tell ssh to login to a machine, do something, and logout again. (I don't do this a lot, but have needed to on occasion. It feels hackish.)
  4. An interactive-only session, like running a subshell when you are already logged-in, like running one shell from another. (For example, I am at the bash prompt, but want to use sh, so I just type "sh" at the prompt.)


The complication arises out of which dotfiles are executed for each mode. Depending on how you are running things, the context of the shell's use, and what other shells are installed, different dotfiles will execute in non-obvious ways. There are even system-wide dotfiles in /etc that are searched, and some shells (like fish) will store totally non-standard configurations.

For example, if my shell is /bin/sh, and I log in to interact with it (scenario #1) on FreeBSD, a file called .profile in my home directory is executed, followed by .shrc, which has effectively been "sourced" - included by the .profile script by way of the ENV variable placed into the environment. If I have the c-shell as my login shell, .cshrc and then .login are executed. If I use bash, .bash_profile is run. But then again, it entirely depends on what system you run and what shells (and dotfiles) you have installed.

In scenario 2, you should generally expect few dotfiles to be called, if any. Scenario 3 might give you half of what you expect, and scenario 4 might give you another or the same half.

To amplify the madness, shells that are related or in the same family (csh begat tcsh, sh begat bash) will often search for and use each other's dotfiles, if present. Sometimes the order of their searching isn't clear; it's just whichever they happen to come across first: file a, file b, or file c.

Worse yet, you may have no choice if you want to run any important 3rd party frameworks, like RVM or Git, which depend on the presence (usually) of bash.

This ... chaos has the net effect of making a monumental task out of a simple thing, like installing a pager such as most and getting it to behave in the way you would expect.

So, here's what I did -- and it sort of works, for whatever it's worth.

First, I went into all the dotfiles and put the line echo "Running $HOME/<dotfile>" near the beginning of each file, to see what exactly was executing when I logged in or ran things in various modes.

Next, I placed the command (PAGER=/usr/local/bin/most; export PAGER) in .cshrc, .mailrc and .profile, just so there weren't any local references to the pagers less and more (both of which I hate because they are unnecessarily painful). Bash has a built-in less-like pager it uses, and you gotta watch out for that one. It can be a real disappointment if it shows up unexpectedly.
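
For reference, the Bourne-family and csh-family spellings of that command differ (the first form won't work in .cshrc, which wants setenv):

PAGER=/usr/local/bin/most; export PAGER     # sh/bash dotfiles such as .profile
setenv PAGER /usr/local/bin/most            # csh/tcsh dotfiles such as .cshrc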

Since I am a conscripted Bourne-shell-type user, I sourced .profile from .bash_profile, and .login from .bash_login. (Even though this sounds logical, I don't think the second one makes much sense. But there you have it. It's there for the feel-good factor.)
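
Concretely, something like this at the top of .bash_profile (abridged):

echo "Running $HOME/.bash_profile"   # the tracing line mentioned above
. ~/.profile                         # reuse the Bourne-style login setup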

So far, all this seems to work - but not for sudo tasks yet. It's taken hours, and there are still things that don't make sense. But hey, if it works, don't fix it.

Here's a resource that attempts to make sense of it all, and I still have problems with it.

https://github.com/sstephenson/rbenv/wiki/Unix-shell-initialization

Good luck!

Wednesday, March 13, 2013

A Notation for Functional Design with Binary Outcomes




I've been working out the design for a Ruby program that builds regular, plain old jails over the past couple of days.

After identifying and reading-up on the problem domain, then writing a set of manual setup instructions for it (still underway[1]), I felt I knew enough to begin design. The program I'm writing is kind of a warm-up to eventually writing a script that installs service jails, which is an advanced setup task.

The program operates at the "glue" layer of coding (see the last blog post on Object-Oriented Design), and I began writing the thing using the Bourne shell. After encountering a problem with pattern-matching command line options, I became very discouraged about the extremely limited capabilities of the Bourne shell. To do the simplest tasks we take for granted in Perl, we have to call two or three other programs in Bourne, spawn a sub-shell, and use the environment to do it.

The only reason to use Bourne shell scripts is to follow the old Unix grind that states "It's good engineering practice to employ technologies that preexist in the OS, because someone might not have XYZ dependency installed on the system to make use of your program"

Although this may be true, after reading about the speed of Perl one-liners against their awk and sed counterparts for pattern-matching, I don't understand why Perl was taken out of FreeBSD in the first place. Absolutely obstinate adherence to tradition. Perl was part of the base OS until a political rift swept away some of the FreeBSD leadership after the 9/11/dot-com implosion. But what little is wrong with FreeBSD these days is another story.

In any case, I felt like losing my cookies when faced with the prospect of using such primitive, awkward technology just to live up to an old UNIX grind. So I decided to choose my kind of tool: Ruby!

I decided, as an experiment, to design my jail-building program from the outside-in, with an emphasis first on user interface. I wrote a few short shell scripts with the prefix "simple_jail" and tested them to see if they covered all my use cases from beginning to end:

simple_jail_config
simple_jail_init
simple_jail_start
simple_jail_stop
simple_jail_jump_in
simple_jail_ssh

These scripts have no options, logic, control flow or conditional instructions in them. They are simple linear sets of commands that just work for a single, fixed configuration.

Next, I translated these items into command options:

simple_jail <configure|initialize|rootlogin|sshlogin|start|stop> <ip_address> .. <subargs>
simple_jail configure <ip_address> <hostname> <username> <password>
simple_jail initialize <ip_address>
simple_jail start <ip_address>
simple_jail stop <ip_address>
simple_jail rootlogin <ip_address>
simple_jail sshlogin <ip_address>

These are the options I could realistically support.

At this point I researched my choices for options-handling. I could either write this myself, use one of two Ruby libraries, or use one of several third-party gems. Each of these packages has somewhat limited features, and may or may not support my command line schema above. Although one of the Ruby built-ins looked like a good candidate, I needed to verify exactly what I would be doing for validation checks once I actually obtained the user's options. So I began to write a long set of logical rules like the ones below:


# check arguments
#
#   if there is one and only one valid option provided
#     and one and only one ip_address provided
#   pass
#
#   if option is configure
#     validate configure subargs
#       one and only one hostname
#       one and only one username
#       one and only one password
#   pass

At that point, I began to ask myself: what would the supporting function names be for doing systems checks on these options rules? And what other checks would I want to do globally? For example:

# Process_and_environment_checks:
#
# script_running_in_jail? true : false
# script_running_as_only_copy? true : false
# script_running_as_root? true : false
# jail_already_running? true : false


I kind of put together a list of function calls that represented checks to options and the system globally, and found myself using a concocted notation to describe sequential program behavior that was helpful in thinking about algorithmic steps.

It's based on the true/false ternary test found in many languages, which takes the familiar form:

<expression> ? <expr if true> : <expr if false> # documentation

This idiom takes an expression (left), evaluates and tests it for true or false, then evaluates one of the two following expressions for a return value. The true return expression is after the "?". The false return expression is listed after the colon ":". It's kind of a shorthand boolean method.

In my twisted version of this idiom, I use it to represent a line of code in a sequence of function calls in a very high level, hypothetical language:

<function_name> ? <happy-path return value> : <sad-path return value> # error message describing sad path

Some function calls using this notation return only true or false. These are status checks. For example:

jail_dir_exists ? true : false

The method jail_dir_exists does the checking. The return values are simple booleans.

In other cases where I used this notation, a function call is made; if it succeeds, it takes the happy path, and if it fails, it takes the sad path. For example, a function that adds a new jail entry to the configuration file:

configure_add_new_jail continue : exit # could not create new entry

In the case above, the function configure_add_new_jail is called. If it succeeds, the program proceeds silently to the next step in the program sequence. If it fails, an action is taken: exit the program. The part to the right of the hash is the error message sent to the console on failure.

In a slightly more complex form of this idiom, a function calls another conditionally. For example, in a utility function:

user_jail_dir_destroy? jail_dir_destroy! : exit # true = user answers 'y'

Above, the user is prompted to destroy a partly-created or preexisting jail on the system by the user_jail_dir_destroy function. The precondition is that he has, at some point in the past, run the script with the "initialize" option, which essentially runs "make installworld" and "make distribution" for a new jail from the host's object tree. If he previously built this jail (partly or fully) he might be unaware of it when running the script this time around, and would need to be prompted for which action to take to prevent his previous work from being overwritten.

So the gist of the pseudo-code line above is to provide the logical plan for doing so. The documentation to the right of the hash describes the mapping between the user's response to the [y/n] prompt with the true or false return parameter. In this case, the one needing description is the true parameter, which contains a call to another function to actually destroy the jail directory:

jail.dir.destroy! continue : exit # true = chflags and rm -rf on jail succeeded

Above, if the destroy function fails to delete the directory, the program exits. But the interesting case is what happens if it succeeds, so that is documented with a message - which could be used in debug mode. In this case, if the operation succeeds, the function returns and the happy path is resumed.

So basically, for each option taken on the command line, I have been defining steps to be taken in this revised true/false idiom format. It's not only compact, but can be edited in a plain text editor as something of a low-level program specification.

The interesting thing is, if you get enough of these statements going, class names begin to emerge, suggesting an underlying object model:

host.configure.etc.dir.exists? true : false

jail.dir_empty? true : false

configure.file.entry_valid? true : false

Above, I can replace underscores with dots and get some idea of how refined I want to make the potential class structure. Note I don't have to know what the classes are *before* writing pseudocode. The steps in pseudocode accumulate, and define emerging class names as I go.


[1] The notes are my own, pilfered from a variety of sources, and should be taken with some caution. They are still incomplete:

https://docs.google.com/document/d/1y2c1O0mAagWD0Eypw0EuB_BbN5vm0MMjrrwLkvfnVcI/edit?usp=sharing

The Importance of Object-Oriented Design


Design Techniques

One of the more stifling features of object-oriented programming is trying to figure out what classes to design.

Unlike its venerated, comparatively simple tribal cousin, procedural programming, object-oriented programming requires you to invent abstractions in the form of classes that describe the problem domain, or selected parts of it - and package functions with data. The classes you design are blueprints from which objects are manufactured at runtime.

Depending on the language, there can also be inheritance, mixins, access permissions, generics, collections, interfaces, and even Eigenclasses that allow you to mutate the original classes and objects at runtime. And for the truly discriminating, pedantic programmer, there are design patterns.

The big advantage of object-oriented design is the ability to speak to the computer in terms of the problem's domain. The procedural approach makes you speak to the computer on its own terms, requiring you to translate the problem domain into terms the computer understands.

But the object-oriented approach is actually just an extension of the procedural approach. It's just that the procedural parts (both functions and variables) are heavily modularized and cast into types - protected data structures. We cannot escape the procedural nature of programming. Programs are implementations of algorithms, and algorithms are sequences of steps -- recipes for solving problems.

It follows that as a complex wrapper to procedural methodology, there will be more design decisions to make when using an object-oriented methodology. And design is all about how to get problem P to solution S with code. The design you devise can vary depending on what type of problem you're solving, and the tools you have available.

This is one of the reasons I am trying to leave Perl in favor of Ruby. I've done a fair amount of reading on Ruby, and don't care to delve more deeply into Perl. Unfortunately, most of my practice with Perl has been through jobs. I was always hired as a "perl programmer" no matter what I did until recently. So, although I've had plenty of exposure and education centering on object-oriented design, I haven't had much opportunity to use it - at work or on my own projects. There just hasn't been a need.

However, what little project exposure I've had has been revealing when it comes to the task of designing classes. There is more than one way to do it. One piece of advice I can give about programming is that you must know your problem domain very well to successfully address it in code. This is especially important with object-oriented class design, as it requires you to declare the problem domain up-front.

So, getting class design correct before coding starts is very important. With reflection and eigenclasses, it may not be as critical, because you can mutate objects after the fact, at runtime, with code. Although this violates some of the principles of class-based object-oriented design (encapsulation), it holds some promise of allowing softer, simpler cookie-cutter class designs to be planned before coding gets underway, which can later be mutated and specialized as the project progresses. This saves time over devising rigid strongly-typed blueprints which become, in a way, immutable before coding begins. Reflection may allow for extra refinement to take place after the initial general design is settled upon. So in this respect, there may be a tradeoff between encapsulation and flexibility with object design.

Using a dynamically typed language like Ruby or Python over a statically typed one like Java or C++ still doesn't substantially relieve the pain of a blank slate when it comes to devising a correct or suitable class model for a domain-specific problem, but perhaps it can reduce the size of the "blank page" you are required to fill in before coding starts.

On top of that, what layer of coding you are doing has everything to do with what language you might be using. By layer, I'm suggesting how close to the hardware you are. Below is a rough diagram of languages vs. layers for most computing problems:

Layer                                       Languages most used

Hardware
Software: No OS                      (Hardware Engineering: ASM, C, Verilog)
Software: BIOS                       (ASM, C)
Software: Bootstrap Loaders          (OS Engineering: C, Forth, ASM) 
Software: OS Kernel                  (C)
Software: Filesystems and Drivers    (C)
Software: Privileged User Mode       (Critical Operations: C, C++)
Software: OS utilities               (Standard Operations: C, C++)
Software: 3rd party system           (Application: Java, C#, C++, C)
Software: Services Frameworks        (Services: Perl, Ruby, Python, Java, C#, C++)
Software: Glue, Build, Admin code    (Production: Perl, Ruby, Python, Shells)
Software: Domain specific langs      (Integration: JavaScript, ERB, jQuery, SQL) 
Software: Entertainment, utilities   (Sourceless: C++, C#, Java, JavaScript)

Above, I have tried to roughly outline the layers at which coding occurs, and the associated hierarchy of usage realms. Each realm can be composed of one or more layers, and there are no firm boundaries.

1. Hardware Engineering - making hardware do something. (embedded)
2. OS Engineering       - making hardware do something sophisticated. (kernel) 
3. Critical Ops         - making sure the OS is running. (critical utils)
4. Standard Ops         - making the OS fully usable.    (complete OS binaries)
5. Application          - making the OS extra usable.    (3rd party software)
6. Services             - making the OS serve an automated purpose. (servers)
7. Production           - using the OS to produce software and services.(glue)
8. Integration          - using client installs for consumption (web browser) 
9. Sourceless           - making self-contained applications (turbo-tax, games)

Whether or not you agree with the categories, there is a distinct migration of languages for the different types of code being written at each layer. The closer we are to hardware, the more lower-level languages we see being used. We start with assembly language and Verilog. The further we go from the hardware, the more higher-level languages we see being used, until we are no longer dealing with source code, or even commands, but end-user applications controlled by user interfaces.

Somewhere in between the OS engineering layer and the critical operations layer, we begin to see object-oriented languages being used. Somewhere between the applications and services layers we see the emergence of "pointerless" languages like Java. Somewhere around the services and production layers we see "scripting" languages emerging as the languages of choice.

So, it's not too hard to see that once we get to the standard operations layer, object-oriented code comes into use. And it doesn't go away after that. In fact, object-oriented languages become dominant as we move toward the end-user and his applications, which can be very sophisticated programs reaching a million-plus lines of code.

So in, conservatively, half of all programming realms, the object-oriented approach is important.