syntax FX
Monday, January 25, 2016
SyntaxFX is now located at abowhill.github.io.
Please update your bookmarks, and thanks for reading!
Thursday, August 1, 2013
Safe, Secure, Portable Remote X-Windows in the Browser
It is indeed possible to be sitting at some remote location such as a hotel, coffee shop, store, library or bus with only public, unencrypted wifi access and a low-powered Windows netbook, and still securely access a complete X-Windows desktop over the internet.
OK, this scenario is not quite as fast as being at the physical X-Windows machine, especially when you fire up a web browser with a bunch of graphics, but it can be just as good as being there for almost everything else. If you think about it, this is just the kind of thing X-Windows was originally designed for: remote access to a graphical desktop on a display server via a dumb terminal. It's really no different over the internet. In the old days these dumb terminals were implemented as hardware, but effectively any dumb Windows netbook can play that role.
Given you have access to a running Unix machine at home or work, even one without a monitor (known as a "headless" machine), all you need are a couple of pieces of software on the Windows laptop, Chrome and Putty, plus a bit of configuration magic on the Unix machine you are accessing.
This setup strategy gives you X-Windows in your web browser via a free VNC Chrome plugin, tunneled through an SSH-forwarded connection to a headless server behind a firewall running a VNC server, which in turn is attached to X-Windows running on a Virtual FrameBuffer server. There are a few bones to connect, and I will try to explain them.
The Virtual Frame Buffer server (Xvfb) comes with the X-Windows distribution as a kind of virtual video driver component. It is similar to the mechanism used to serve up Windows Remote Desktop, in that it uses code to read the frame buffer that stores the screen image, but it's Unix instead of Microsoft Windows. Basically, Xvfb is treated as just another type of video card driver, like Nvidia or AMD. But instead of displaying to a video card, it "displays" everything live to a space in memory. Additionally, no monitor or keyboard has to be connected; it's all redirected to run as a complete desktop in memory, as if it were connected to a real mouse, keyboard, video card and monitor. When you run X using this driver, it just sort of sits there in no man's land, running X-Windows as if it were running on the local machine.
To access the machine running the VNC server behind the firewall, you must first have SSH access to a Unix machine in the DMZ in front of the firewall. There are three options from here.
1. Set up the VNC server on the DMZ machine. This is probably a terrible choice, as it exposes both X-Windows and VNC connections to the open world. Technically it can be done, but the addresses should be bound to non-public IP address segments (e.g. 192.168.X.X or 10.0.0.X) on your internal network. I would not trust this kind of setup, and thus won't go into it.
2. Set up VNC on a separate machine that can only be accessed on the internal network. This is a fine solution, but it requires another machine, which in my mind is wasteful and excessive. Still, it is a good choice if you have tons of computers to dedicate to tasks like this.
3. Set up VNC in a jail whose interface can only be accessed over private address space behind the firewall. This is the ideal and most economical solution, and can easily be done on FreeBSD. If you are smart, you will be using this operating system in your DMZ. (If you are very smart, you would probably be using OpenBSD for this purpose.) In any case, set up a jail on your FreeBSD DMZ machine with a local address (192.168.X.X or the like) and install the x11/xorg (X-Windows), net/tightvnc and net/x11vnc ports, as sketched below. They are a bit of a long build, but worth it.
The idea here is that if X is running inside a FreeBSD jail on the host machine placed in the DMZ, and the jail is only accessible over protected network address space (such as 192.168.x.x), only SSH tunneling can expose the machine from the outside.
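A minimal sketch of the port builds from inside the jail, assuming the ports tree is available there (these are just the standard make targets for the ports named above):
> cd /usr/ports/x11/xorg && make install clean
> cd /usr/ports/net/x11vnc && make install clean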
NOTE: After upgrading and preparing a plain old jail to test this out, I am backing off my recommendation to use the tightvnc port until I can manage a working configuration with it. Tightvnc worked in a non-jail environment, but in a jail it has a few problems that I don't understand yet, so I've modified the instructions to work with the net/x11vnc port. I'll try to provide the net/tightvnc configuration sooner or later, because I do believe its performance is rated better. The net/x11vnc port, however, does work inside a jail with some adjustments.
Once you've built a jail according to the manpage for jail(8), you'll probably want to start it with a command like this:
> jail -c -f simple_jail_config
where simple_jail_config is a file that looks something like this:
testjail {
  path = /usr/home/username/192.168.0.49;
  mount.devfs;
  allow.raw_sockets;
  allow.sysvipc;
  host.hostname = testhostname;
  ip4.addr = 192.168.0.49;
  interface = rl0;
  exec.start = "/bin/sh /etc/rc";
  exec.stop = "/bin/sh /etc/rc.shutdown";
}
That's the first part of the setup, and it can be done on a headless UNIX server. What follows is the set of commands that fire up the server-side programs (a setup adapted from the technique shown at http://en.wikipedia.org/wiki/Xvfb).
On your VNC host (be it a jail or otherwise), type the following as a non-root user:
> export DISPLAY=:1
> Xvfb :1 -screen 0 1024x768x16 &
> fluxbox -display :1 &
The second part of the server setup is running the VNC server as a companion process to X. VNC will take whatever desktop you are running (even a virtualized one like Xvfb) and transmit it over the wire to a VNC client running on another machine.
> x11vnc -ncache 10 -display :1 -bg -nopw -listen 192.168.0.49 -xkb
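Roughly what those flags do, as I read the x11vnc options (x11vnc -help has the authoritative descriptions):
- -ncache 10: client-side pixel caching to speed up redraws
- -display :1: attach to the Xvfb display started above
- -bg: fork into the background once listening
- -nopw: no VNC password; tolerable here only because the listener is bound to a private jail address
- -listen 192.168.0.49: bind only to the jail's private address
- -xkb: use the XKEYBOARD extension for more accurate key mapping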
That's all you need to do on the server. Four short command lines.
Some explanation is due. VNC is run as a co-process to X because of security problems with serving X connections directly over the wire, even when X is protected by SSH forwarding.
One of the things about X (a good thing and a bad thing) is that you can redirect windows and keystrokes to other X servers on other machines. This should never be done in an insecure environment, because it exposes all kinds of details that can compromise your privacy. So you will always want to use SSH to cryptographically secure the connection.
There is a feature in SSH that allows global forwarding of incoming connections. However, on good UNIX distributions like FreeBSD this is turned off by default. Due to security problems that can crop up via the MIT Magic Cookie, complexity of the X protocol, and callbacks to client X connections, you should not use this feature to forward your X-Windows connections directly. Instead, use VNC as a proxy to X, which simplifies the security model greatly. With VNC there is only a single connection for SSH to tunnel, and it doesn't rely on wobbly trickery with SSH and the MIT Magic Cookies and callback connections to the client every time a new application is opened.
For more info, see this excerpt from SSH, The Secure Shell: The Definitive Guide (O'Reilly). It's complicated. http://csce.uark.edu/~kal/info/private/ssh/ch09_03.htm
Other positive side-effects of pairing an X-Windows connection with a VNC proxy are slightly better performance over the wire and compatibility with just about any client platform. In other words, you don't have to install and run a big, fat X server on the remote client to get a full remotely-served X desktop. Using the VNC scheme, small, low-powered machines like netbooks running Windows or some other OS can access X-Windows as easily as UNIX laptops running X-Windows.
So the strategic bottom line is that you must pair VNC with X-Windows on the server. VNC will tap into the virtual frame buffer of the X server and act as a kind of proxy between the X server and the remote VNC client. Since the VNC protocol is carried over a single connection, it can be safely tunneled through SSH, which can port-forward the VNC connection to a local port on the client machine. This is one of the magical, almost quantum features of SSH: the ability to transport the interface of one machine behind a firewall to the local interface of some other machine in front of it. Finally, the VNC client in the remote machine's web browser has only to connect to one of its own ports to access the whole thing.
As far as the client setup is concerned, it's quite simple.
First, install the Google Chrome VNC plugin. It's free, from the Chrome web store.
Secondly, install a copy of Putty, the free and venerable ssh client. If you are running Linux or OS/X on a laptop, you probably already have ssh, so you can dispense with Putty.
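In that case, a single command sets up the same tunnel that the Putty steps below configure (username, dmz-host, and the ports here are just placeholders matching the example values used elsewhere in this post):
> ssh -L 3333:192.168.0.49:5902 username@dmz-host
Leave that session open and point the VNC client at localhost:3333, exactly as described below for Putty.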
For Windows users, Putty should be configured with a profile to access the host UNIX machine.
In other words, in the Configuration window, under the Session category, give the IP address or hostname of the DMZ host. The port should already be set to 22.
Next, scroll down the configuration tree to Connection > SSH > Tunnels and set the source port to something like 3333.
Next, set Destination to:
[host]:[port]
where [host] is the hostname or IP address of the machine behind the DMZ firewall that is running the VNC server, and [port] is the port number the VNC server is listening on. Mine is usually port 5902, but you can find this for your system by getting on the VNC host and looking at the logfile for vncserver after it has been started. It's in the hidden .vnc directory in your home directory. There will be an entry in the logfile saying something like: Listening for VNC connections on TCP port 5902
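If you started x11vnc as shown earlier rather than a vncserver script, another quick check from the VNC host itself (assuming FreeBSD's sockstat is available in the jail) is:
> sockstat -4l | grep x11vnc
The local-address column shows the address and port x11vnc is actually bound to.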
Once you've entered these two values in Putty, click ADD.
Then go back to the Session category and SAVE your profile. When you make a connection to the server from your client laptop using Putty, you'll be prompted for your username and password and dropped into a shell. Leave the Putty session open, because it is what makes the VNC server connection available to you on localhost port 3333.
At this point, you can open Chrome, open the extension, and type in localhost:3333 to make the connection. You may be asked for a password. That's all there is to it. Your X-Windows desktop should display in a detached window.
Thursday, July 11, 2013
Thermal storage and Active Resource Management
I recently added a couple of new games to my system, each about 25 Gigabytes in size, and noticed how poorly I had planned the use of my own computing resources. I installed the game I never play on my best drive, and the game I now play most often on my worst.
Games can be very file intensive at times, for example when loading scenery as you play. If these scenery files reside on a slow or sluggish drive, it can have a terrible effect on your play experience. I also notice that I have several bulky games that I do not use very often. Why should these occupy my best resources?
Games aren't getting any smaller. With each successive year, the size of the software packages is increasing. Here is a list of a few games I have installed, and their respective sizes:
- ARMA 2 / Operation Arrowhead: 24 G
- Guild Wars 2: 22 G
- ARMA 3: 7 G
- Skyrim: 10 G
- Age of Conan: 25 G
I can remember when games took less than a few floppies to install, and the average disk size was in Megabytes, not Gigabytes. Games have gotten larger for sure. But computing resources have become more varied and capable. Back in the day of Duke Nukem and Wolfenstein, I could install everything from floppies or CD to a single disk and be happy. But a single disk is all I had. There were no DVDs, SSDs or Pen Drives.
Today, we have more storage choices on the average computer. For example, my current system has several kinds of drives (theoretically) available to it:
- A very fast but somewhat small SSD (128 G)
- A quite small but relatively fast pen drive (32 G)
- A fairly large but slow mechanical hard drive (320 G)
- A very large but very slow DVD that is capable of read/write/erase (8 G)
- The cloud. Very slow, not that large and not that secure. (10 G)
Ideally, at any given time, I would like to have the games I play the most on the fastest drive, and the ones I play the least on the slowest. For me, this would mean a game like Skyrim (which I don't play often) to be packed away on a slow DVD, and something like ARMA (which I play a lot right now) to be available on the very fast SSD.
But there are complications. I like to call this problem one of Active Resource Management, and it is really the domain of the operating system.
- My tastes vary! One week I may be playing ARMA2, but the next week I might be really into playing Age of Conan.
- I can't fit too many games on the SSD because it's too small. The operating system takes up 25% of the drive and needs breathing space.
- Copying huge amounts of data from one disk to another is error-prone and can break a lot of things like shortcuts and deinstallation scripts. Plus it takes a lot of time, and hogs the computer's resources. It's just terribly inconvenient.
- Removable disks are not always there. I don't necessarily have a rewritable DVD in the player all the time, nor do I always have a Pen Drive in the USB port.
- Not all the drives are predictably the same size and speed. My USB pen drive could be large and fast or small and slower, depending on which stick I install. Likewise, DVDs can vary in capacity, depending on what disc you install.
The problem of Active Resource Management on the computer can be solved by taking a Thermal Storage approach. Files that are infrequently accessed would be demoted to slow, high capacity, low-availability devices (Cold Store); while frequently-accessed files would be promoted to fast, high-availability devices (Hot Store).
Currently, even the most advanced operating systems do not perform this complex function. So we, as human beings, are forced to do the work of the computer manually. This is just the state of affairs: a lack of innovation in the industry.
But imagine for a minute what such a system would be, and how it might behave:
- The OS would have to have a monitoring system that noticed which files you opened and closed. It would have to keep track of this in a table.
- The OS would have to know the capacity, performance characteristics and availability of all storage devices on the system. It would have to store that in a table.
- The OS would have to "adopt" and "melt" devices together as they were plugged in -- to appear dynamically as one storage unit to the user, even though 4 or 5 different devices may be currently used.
- The OS would have to dynamically and thermally schedule files for promotion or demotion to hot / warm / cool / cold storage based on heuristics and ranking of what it observed by frequency and duration of access. Conversely, it would have to evaluate the capacity, reliability and availability of storage devices, and judge which would be the best place to store something. Content of removable devices would have to be mirrored onto another non-removable device, depending on the policy the user chooses.
- It would have to do thermal storage for the user in a hands-off manner. You might be able to set policies and rules, but operationally the whole thing is done for you transparently.
For example, games you play frequently would get promoted to fast, available storage and would deliver what you'd expect. You wouldn't have to manually manage this when your priorities changed. No more copying huge directory trees from one device to another depending on your anticipated usage.
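Until an operating system does this for us, a rough manual approximation is possible on a UNIX-like system. The sketch below is my own illustration rather than a feature of any OS: it flags game directories (under a hypothetical /games) that nothing has read in the last 60 days as candidates for demotion to cold storage.
#!/bin/sh
# Flag game directories with no recently-accessed files as "cold".
# Note: this relies on access times, so it won't work on noatime mounts.
GAMEDIR="${1:-/games}"    # hypothetical location of the installed games
for dir in "$GAMEDIR"/*/; do
    # -atime -60: any file accessed within the last 60 days; -quit stops at the first hit
    recent=$(find "$dir" -type f -atime -60 -print -quit 2>/dev/null)
    [ -z "$recent" ] && echo "cold candidate: $dir"
done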
Monday, July 1, 2013
Using a custom face in ARMA III
I've been playing a lot of the ARMA 3 BETA recently, a military simulator with an incredible amount of customizability for creating your own game and simulation. ARMA (more specifically ARMA 2: Combined Operations) is the software platform that hosts the famous zombie-survival multiplayer mod (a 3rd-party modification) called DayZ.
If you haven't played ARMA 2 and Dayz, I recommend you do. The game is very affordable and available on Steam. If you don't have ARMA 3, you should get it too. Although it's still in BETA, it's also quite affordable and you can keep in sync with development builds.
Anyway, I've started to modify various aspects of ARMA 3 myself, and saw this somewhat outdated tutorial on fan site OFPEC on how to change your character's face. I thought I would condense the instructions a little to make it easier to use in ARMA 3.
This requires an installed copy of ARMA 3, preferably from Steam.
Typically, after playing the simulator a few times, you'll have been made to create a character with a name. You would have been given the opportunity to change its face and voice as well as select glasses, etc.
For the purposes of this setup, we will go with something off the internet which can serve as a template.
NOTE: If you are using ARMA 2, go to the OFPEC Face Library for faces.
For ARMA 3, you can use this face instead as a starter.
Save the image as face.jpg in:
"C:\Users\<windows_name>\Documents\Arma 3 - Other Profiles\<arma_character_name>\face.jpg"
where <windows_name> is your Windows login name,
and <arma_character_name> is the character you wish to apply the face to.
Then open up ARMA 3 and select OPTIONS, then PROFILE, from the MAIN MENU.
Then select EDIT, then CUSTOM FACE, then APPLY, then OK.
That's it!
Thursday, May 16, 2013
Nightmare scenario
Here is a scary thought. Suppose the Humane Society one day came along and reclaimed your favorite adopted pet under some obscure law allowing it to do so. Then, in some lab, the Society biologically de-engineers your pet into its primitive, slavering, ancestral form from epochs ago. If you had a kitty, it would be genetically devolved into a sabre-toothed tiger. If you once had a doggie, it gets de-engineered into a small bear, for example. Your pets are still your pets underneath; it's just that they've taken on a new, primitive form.
Then one day, the Society sends you a letter with an ultimatum: they will destroy your pet unless you enter into an agreement with them to buy it back and pay monthly security fees to prevent it from eating you for dinner.
What would your rights be to protect something that was given to you for free, in which you invested years of loving care and development, and which was repossessed by the "charitable agency", turned into a monster and held hostage for cash? Wouldn't you feel devastated, not only because you were being exploited and lied to, but because a thing you raised and came to love was now being held hostage by a corrupt organization?
OK, so this is just an analogy. The Humane Society will never swap its identity for that of a Genetic Regression Kidnapping and Hostage-taking organization. It would be a disaster to humanity and civilized pets everywhere.
But a parallel type of situation isn't so improbable with large cloud providers in the software business today.
Consider this: There has been a Big Trend in the past few years to cloudify desktop applications. In other words, everything you once installed and ran from your desktop will eventually be runnable from inside your web browser using open-source and closed-source technologies. This has been perhaps the most important evolutionary change to the software industry in the past 10 years.
Call it "Web 2.0" or "The Cloud" as you like, but it's one of those mega-changes that comes around every 10 or 20 years that forces everyone to change the way they do things. Back in the 90s, the Big Trend was networking, the emergence Internet and e-mail. The Big Trend in the 90s forced every company to move their paper-based accounting, billing, filing, tracking and service systems from PAPER to SOFTWARE. Thousands of programmers and engineers were employed as a result of this shift. Just about every business moved from 3-part NCR forms to desktop applications which did the same thing. University textbooks taught aspiring programmers software development and process methodologies to transform paper systems into software systems.
Now we have a similar transformation taking place. Business systems are moving from the desktop to the cloud. Newly-invented systems are already developed as cloud applications. Companies like Google are the proverbial Humane Societies that provide free business applications for home use that were once pay-for items made by Microsoft. Google even has a hardware platform, the Chromebook, to support Cloud-only apps.
Microsoft, following suit, has also moved substantial resources into Cloud development. Although their bread and butter is old-fashioned installable operating systems, their Office products are starting to move Cloudwards with Office365, and their platforms are integrating with Cloud-served built-in applications. Amazon, too, is heavily invested in this area when it comes to retail. Their web services are their bread and butter; the web IS their storefront. There are many more companies that are re-orienting away from installable software to cloud-based software and services. It's a slow-moving revolution.
But what position does this place Consumers in? Well, for one, you don't possess the software. It's not yours. Even if you die, your relatives can't go onto your computer to retrieve your stored music or documents. Because it's the Cloud, the data is stored at the company that provides the service. And possession is 9/10ths of the law, so whoever HAS the data OWNS the data.
Secondly, nothing is preventing cloud providers from evolving (or more specifically devolving) cloud applications (your proverbial pet cats and dogs) back into installable applications that only run on their proprietary client software platforms.
We are already seeing some of this with the de-cloudification of some Google applications like Gmail, maps, and search. In the offing are Android-only applications, and the closing-off (in some cases a forcible steering-away) of smartphone-based access to the web applications through the browser.
Right now, these apps may be offered OPTIONALLY to people, but who is to say that one day Google will not shut down the web-based variants and require people to install their branded software to access them?
It's possible for you as a consumer to have your data so heavily invested in a cloud location that the corporations controlling the means of delivering that data can change the delivery method to exclude general web access to it at some point. This is the creation of "Walled Gardens", in which you can only use a company's client software to access your personal data. Using our analogy, this is akin to selling you security services after they devolve your pet into a slavering, primitive monster.
Sure, RIGHT NOW it makes things more convenient for the big cloud providers to not have to maintain 200 separate software applications that don't inter-operate written in 10 different languages, or distribute millions of security patches to these apps, or stamp millions of CDs to be distributed to retail outlets.
But the installable apps they are promoting nowadays are not self-contained. They rely on the same cloud infrastructure to access data, only they are closed-source and commit you to a certain hardware and operating system platform in order to use them. These platforms often have fees associated with them. This is particularly true with phones: iPhones, Windows phones and Android phones.
This style of proprietary app devolution is leaking upwards from cell phones to tablets, to netbooks, to desktops. Will we soon be faced with no choice but to install a program to access Gmail or Office365 or AmazonFresh?
This is an interesting question. And since this post is getting long, I will just float a simple idea: the creation of a consumer-protective ratings organization. This organization could continuously evaluate the Cloud-openness and accessibility of these online products and give a level-based rating of some kind to each product every few weeks, based on criteria applied to all similar products. A product's history could be used to indicate risk to the consumer.
The higher the rating and level, the more desirable the cloud service and applications would be to the consumer, as ratings would be geared toward metrics on product and company attributes that indicate openness and stability.
For example, if Gimmesoft corporation decides to offer a free office suite online that works in all web browsers on 3 types of desktops, 3 types of phones and the telephone, with no special browser plugins and no signs that the company is developing replacement applets, it gets rated as a level 6 provider.
However, if 8 months later it kills off its webmail application and introduces in-house-written software applications that only work on its own products, its level gets reduced and published.
With the kind of control web applications give corporations, something is due to the public: an independent, international consumer protection committee and monitoring agency.
There is no reason it would be unfair to corporations who really want to advance humankind by providing free and open software products.
Yet, the point would be to protect consumers from being swindled out of their personal data, papers and effects.
SSH port forwarding ala crosh
On a Google Chromebook you can access a UNIX-like terminal window by typing CTRL-ALT-T from the Chrome browser. This feature makes a Chromebook much more viable for technical people.
Without it, you'd have to set up your own HTTP/SSL Tunneling/Comet server at home to even come close to embedding a terminal in a web browser (a complicated setup project I have done before). So hats off to whoever had the wherewithal to put that feature in.
The terminal's bound shell - called "crosh" - is however limited in the Chromebook's user mode for security purposes, so you won't be able to do your typical UNIX-y stuff on your Chromebook directly; but a copy of SSH has been provided to securely connect to a real UNIX-like machine over the internet.
One trick you can do is to use port-forwarding with this shell, enabling you to plug into a system like a protected web server deep behind a firewall, and have it deliver pages to your Chromebook as if the webserver were running on the Chromebook itself.
So given the following conditions, you should be able to do this:
1. You are at some location outside the network protecting the webserver you want to access.
2. You are in the crosh shell on a Chromebook.
3. You can publicly access an SSH host computer connected to the firewalled network.
4. The SSH host has SSH access to the internal webserver sitting behind the firewall. The webserver is serving up pages on that machine's local port 3000.
In crosh, just run:
crosh> ssh
ssh> user (your username)
ssh> host (IP address or name of publicly-accessible SSH server)
ssh> forward 8000:(internal webserver IP address):3000
ssh> connect
(enter credentials)
Done! Leave everything alone.
Open a Chrome browser tab, and point it to:
http://localhost:8000
Bang! Your internal, firewall-protected website is being served up to your local Chromebook as if it were running on the Chromebook itself.
It's a freaking miracle.
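For comparison, on a regular UNIX-like client the same tunnel is a one-liner with stock OpenSSH (same placeholders as in the crosh steps above):
> ssh -L 8000:(internal webserver IP address):3000 user@(publicly-accessible SSH server)
Then browse to http://localhost:8000 just as before.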
Saturday, March 23, 2013
Shelling it again
Sadly again, I find myself using the punishing /bin/sh to script tasks in FreeBSD. And again, I am reminded how painful and time-consuming it can be to do some of the simplest things.
One of the first things you should realize is that sh is not Bash. It is Bash's featureless ancestor, a kind of pygmy caveman. It is smaller, faster, and somewhat harder to work with. It has nearly meaningless error messages.
One thing I am absorbing about the language this shell speaks is that variables take several forms.
1. Left-hand side name. Ex: dinosaur="Dino"
In this form, a variable is receiving a value.
2. Right-hand side name. Ex: animal=$dinosaur
In this form, the variable $dinosaur is being used for something; here, assignment to another variable.
3. Formal name. Ex: ${dinosaur}
The same as $dinosaur, but in this form you can butt the name right up against other text, like _${dinosaur}!!_, which will expand to _Dino!!_.
4. Verbing-name. Ex: animal=$(dinosaur)
Although it looks like one, this is not a variable, but a command to do something in a shell. In this case, we are trying to call a program named "dinosaur" in the operating system. If dinosaur exists and replies, $animal will hold the response.
5. Deep verbing-name. Ex: $($dinosaur)
Just like the verbing name, but specifically calling whatever program the value of $dinosaur names; in this case, a program called "Dino".
6. Subshell verbing-name. Ex: `dinosaur`
Here, we are calling a subshell to try to run the program "dinosaur". And so on.
It's worth noting the shell has two storage areas: the environment and the shell's own unprotected storage. The environment contains all the variables that have been exported to semi-protected storage, and these persist throughout the shell's various operating modes and subshells. The shell's unprotected storage, however, can be assigned to without using the export command. This matters in shell programming, because when you assign some value to a variable, the name and value are stored in this unprotected area. Unlike programming languages, the shell has no other way to store names and values.
When you run a script from the prompt like this:
> scriptname.sh
it runs in a subshell, and all the unprotected storage it used is destroyed when it returns, although the semi-protected environment is transferred to the subshell. When you run a script like this:
> source scriptname.sh
it is not run in a subshell, and all the variables and values the script created are retained in the unprotected environment. This can have ramifications between subsequent runs if you don't reset your variables in your script.
It is also true that all variables are global by default in a script. So although these variables are often destroyed after the script has completed running in a subshell, you can get into trouble by assuming names are localized inside the body of the script itself.
Shell scripts do have callable functions, but they can't return values other than exit codes. If you want return values, you have to assign a global variable the role of holding a return value from all functions, and clear that variable at the beginning of every function. This is called defensive programming. So the deal is, you can have functions effectively return two values: a status code (an integer, usually indicating failure or success) and a string containing some result data.
Below is a function that takes a pathname as a string, cleans duplicate slashes out of it, and returns it as a string. It also returns a status code: whether the clean path was found (0) or not found (1). If the function was handed an empty input string, it returns 2, and if some other error occurs, it returns that too. The test driver basically calls the function and parses the results; the inputs to the test driver are various unclean paths.
I think this shows functions are viable in sh scripts, which can serve as an advantage when programming these things. But getting the syntax and behavior correct is actually quite hard. Using Perl or Ruby, something like this can be done in minutes. Doing it in shell can take a lot longer, especially because it's a blind man's walk of trial and error, particularly if you're used to more sophisticated languages where true is true and false is false, where you don't have to call external programs for regular expressions, and where built-in variables don't disappear if you don't capture them right away.
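Here is a minimal sketch of that function and its test driver in /bin/sh. The names (clean_path, RESULT) and the reading of "found" as "the cleaned path exists on the filesystem" are my own assumptions; the status codes follow the description above.
#!/bin/sh
RESULT=""                      # global slot that carries each function's string result

clean_path() {
    RESULT=""                  # defensive programming: clear the return slot first
    _input="$1"

    # Empty input string: status 2
    [ -z "$_input" ] && return 2

    # Squeeze runs of slashes down to a single slash (tr -s is POSIX,
    # so no external regex program is needed)
    _cleaned=$(printf '%s' "$_input" | tr -s '/')
    RESULT="$_cleaned"

    # Status 0 if the cleaned path exists on the filesystem, 1 if not
    [ -e "$_cleaned" ] && return 0
    return 1
}

# Test driver: hand the function various unclean paths and parse the results
for _path in "/usr//local///bin" "//etc///rc.conf" "/no//such///path" ""; do
    clean_path "$_path"
    _status=$?
    case "$_status" in
        0) echo "found:     '$_path' -> '$RESULT'" ;;
        1) echo "not found: '$_path' -> '$RESULT'" ;;
        2) echo "empty input string" ;;
        *) echo "unexpected error $_status" ;;
    esac
done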
Resources
Learning the Bash Shell (Newham & Rosenblatt, O'Reilly)
Easy Shell Scripting (Cherian, Linux Gazette)
returning value from called function in shell script (Stack Overflow)
Return a single value from a shell script function (Stack Overflow)
Thursday, March 21, 2013
Upgrading a -STABLE ZFS system with clang
One of the technologies I was interested in testing along with ZFS was the LLVM-based clang compiler suite on FreeBSD, which is currently under integration and slated officially to replace the gcc/g++ compiler suite in FreeBSD 10. Right now, clang is in the 9-STABLE base system alongside gcc/g++.
Clang has a lot of virtues compared to gcc/g++ among which are:
- Better, more informative error messages
- Better compliance for c++
- Better support for IDEs and diagnostic tools
- Uses less memory
- Has JIT support
- Isn't monolithic, as is gcc/g++
- BSD license, so commercially viable.
This is one of the things I love about FreeBSD. It's always kept as commercially viable as possible with the BSD license. There is no reason to use open source software at work or home that has restrictive licensing. Adoption of clang is just another good reason to use FreeBSD.
Instructions for building the OS with clang are located at: https://wiki.freebsd.org/BuildingFreeBSDWithClang
This page has some useful tips, and there are a couple worth mentioning here. If you want to buildworld and your kernel with clang, you have to enable the clang suite in /etc/make.conf to replace gcc/g++:
CC=clang
CXX=clang++
CPP=clang-cpp
If you do a
# make buildworld kernel
with the default command line options, it should work just fine. Nothing further to do.
However, if you're feeling cautious and want to test the kernel itself first, run the following from /usr/src:
make kernel KERNCONF=GENERIC INSTKERNNAME=clang
The line above builds a GENERIC kernel called clang, but places it into a separate directory along with its modules in /boot. This way, you leave your previous gcc kernel in place, yet can test the clang kernel easily with some intervention at the boot loader.
If you go this route, when you reboot, drop to the bootloader (option 2) and enter the following to change the module path to boot into the clang kernel:
set module_path=/boot/clang
boot clang
If the kernel panics, there is no need to do so yourself. Just reboot into the old gcc /boot/kernel without intervention, and it will load by default. If you moved or destroyed the old kernel and that option isn't available, you can always opt to load /boot/kernel.old and its modules instead from the boot loader.
However, a clang boot will likely work, and the system should boot and enable you to make kernel buildworld, installworld, etc. This worked fine for me, but on one machine there was a kernel panic bootstrapping the system due to (probably) module installation paths, which was fairly easy to address.
For some reason, the clang kernel was loading but clang modules were not. I decided to reinstall the kernel again, leaving off the KERNCONF=GENERIC build option:
make reinstallkernel
Which did the trick.
Once booted, a subsequent view of dmesg will show which compiler was used to build the kernel.
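For instance, the boot banner that dmesg replays includes a compiler version line, so something like the following should mention clang rather than gcc (treat the exact wording as release-dependent):
dmesg | grep -i version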
I will be experimenting with this ZFS clang build setup for a little while, and note some of the issues I come across in following posts, but so far, ZFS and clang kernel/OS are performing nicely on my little underpowered Toshiba NB 205 netbook!
Note: Rebuilding the kernel/OS worked fine on my server machine - an old Celeron 1G box.
Dotfile map for several shells
Here is a preliminary dotfile invocation map for several FreeBSD shells:
- /bin/sh
- /usr/local/bin/bash
- /usr/local/bin/ksh93
- /bin/csh
- /bin/tcsh
For example, for /bin/sh and ksh93, the local environment variable ENV will determine at execution-time whether the shell runs an additional dotfile on login: either .shrc or .kshrc. And I still may not have the rules down correctly. It's very easy to be wrong when trying to describe this behavior in simple terms.
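As a concrete illustration, the stock FreeBSD ~/.profile sets ENV along these lines (the ksh93 case with .kshrc is assumed to work the same way):
ENV=$HOME/.shrc; export ENV    # in ~/.profile: tells /bin/sh to read ~/.shrc at startup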
My general impression is that this needs to be harnessed and organized.
I am contemplating testing a system of indirect links to shell dotfiles, using the following scheme:
1.) Place all real dotfiles from all shells in a ~/shell-dotfiles directory, with their names prefixed by "DOT" as in DOT.kshrc and DOT.profile.
2.) Create a ~/.sane directory containing the following subdirectories:
- interactive
- login
- neither
- both
- (non-interactive?, non-login?)
Under each of those four subdirectories, have a subdirectory for each shell, containing links to files read in that particular mode.
So, for example, dotfiles executed in Bash's interactive-login mode would be linked (either hard or soft) under ~/.sane/both/bash to their corresponding real dotfiles in ~/shell-dotfiles. Specifically, .profile (as one case) would have a link at ~/.sane/both/bash/DOT.profile pointing to ~/shell-dotfiles/DOT.profile.
3.) Lastly, links would be created from the dotfiles' expected locations in $HOME to the corresponding files in ~/shell-dotfiles. So, for example, $HOME/.profile would link to ~/shell-dotfiles/DOT.profile.
So, there would always be one place to store the real files flatly and conspicuously (~/shell-dotfiles), so there is never a name conflict. Secondly, there is a single place to store shell dotfiles specifically, used or unused, which is good for spot backups. Also, if hard links are made, there is some referential integrity to the whole thing.
The purpose of the ~/.sane directory would be to have a place to edit a file based on the shell and particular login scenario, like interactive-only or login-only. It would be very clear when editing in the ~/.sane directory that you were editing for a particular context, and could be better informed about the expected set of consequences.
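A minimal sketch of wiring this up for the bash login-interactive case; the directory names follow the scheme above, and everything else is illustrative:
mkdir -p ~/shell-dotfiles ~/.sane/both/bash
mv ~/.profile ~/shell-dotfiles/DOT.profile                        # the real file lives here
ln ~/shell-dotfiles/DOT.profile ~/.sane/both/bash/DOT.profile     # context view (hard link)
ln -s ~/shell-dotfiles/DOT.profile ~/.profile                     # where the shell expects it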
Wednesday, March 20, 2013
ZFS: A Truly Superior Filesystem
My wife just recently got a new HP Chromebook, causing her to rapidly abandon her 3-year old Toshiba NB 205 netbook. This gave me a new computer to experiment on, and of course I installed FreeBSD.
This is a very low-powered machine with a ton of case-edge built-in peripherals. It has 2 GB of RAM and an internal disk upgraded to 230 GB. It originally came pre-packaged with Win7 "Starter" edition and a bunch of Toshiba bloatware, which makes it the perfect target for an OS nuke and wicked post-nuclear experimentation.
As my new testbed, this machine is running some of the newest, up-and-coming features of FreeBSD, among which is ZFS. I used these instructions for a "Road Warrior" laptop setup:
http://forums.freebsd.org/showthread.php?t=31662
If you have ZFS, Perl, and beadm installed, I wrote this little shell script to dump information from a variety of sources on FreeBSD.
What is ZFS exactly? The skinny is that it's Sun's newest(ish) file system, and it seriously improves on anything else in existence right now. By far and away, it is the most sophisticated file system out there today. It lives on OpenSolaris, and FreeBSD developers have been quietly tooling away at it for a few years now. I expect, or can only hope, it will ultimately replace UFS.
http://hub.opensolaris.org/bin/view/Community+Group+zfs/whatis
Some things ZFS has that other FS do not:
End-to-end data integrity: According to Wikipedia's ZFS article (https://en.wikipedia.org/wiki/ZFS), about 1 in 90 hard drives have undetected failures that neither hardware nor software can normally catch. This phenomenon is called "silent corruption", and it is experienced both at large data providers and at small ones with cheap hardware. ZFS can be employed in these cases to detect and repair silently corrupted data, because it uses mechanisms to validate and store data that RAID doesn't.
Snapshots and Boot Environments: Similar in concept to Win7 restore points, this feature gives you the ability to create a perfect bootable copy (a snapshot) of an existing OS. On the Toshiba, it takes only 3-5 seconds and uses a nominal amount of disk space (try that on Win7). You can clone these, boot into them, destroy them, mount them or even export them to another system. The boot environments concept is from Solaris. If you install a utility called beadm from the ports tree, it emulates Solaris' nice interface in FreeBSD and offers even more elegance than using the two management utilities, zpool and zfs, directly.
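For instance, with beadm installed, managing boot environments looks roughly like this (the environment name is made up):
beadm list                   # show existing boot environments
beadm create pre-upgrade     # capture the current OS as a new boot environment
beadm activate pre-upgrade   # boot into it on the next reboot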
No fsck: ZFS uses a maintenance technique called "scrubbing", which is run periodically, about as often as you would run an SSD optimizer or defragmenter. Scrubbing, unlike fsck, can be run on an online, mounted, active disk, and it checks not only metadata but the data itself for corruption. Auto-repair is done via RAID-Z (ZFS software RAID, another feature) or by an on-disk replication mechanism that looks into redundant copies for good replacement data. ZFS uses copy-on-write semantics, so data on the disk isn't corrupted the same way it can be on a normal file system.
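A scrub is a single command against a live pool; the pool name tank here is just a placeholder:
zpool scrub tank     # start a scrub on the mounted, active pool
zpool status tank    # report scrub progress and any checksum errors found or repaired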
No partitions or volumes: Hardware is organized into datasets and zpools, or virtualized storage. No formatting, slices, or fdisk. You can easily create filesystems within these pools, and within other filesystems. You can add disks as mirrors from the command line, impose quotas on filesystems, reserve storage, share storage, compress, have transactions, and there are no real limits on the number of directories, filesystems, or paths, and other things normally imposed on file systems.
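A small taste of what that looks like in practice; the pool, disk, and dataset names below are all hypothetical:
zpool create tank mirror ada1 ada2      # a mirrored pool built straight from raw disks
zfs create tank/home                    # a filesystem inside the pool
zfs set quota=20G tank/home             # quota on that filesystem
zfs set compression=on tank/home        # transparent compression
zfs snapshot tank/home@before-cleanup   # instant snapshot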
And there are more features. It's almost mind-boggling. ZFS seems to do EVERYTHING right, and it's just too good to pass up.
http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/zfslast.pdf
https://wiki.freebsd.org/ZFS
Monday, March 18, 2013
On the quagmire of shell dotfile execution
Have you ever tried to get a third-party full-color 4-directional scrolling pager (aka: most) working under bash so you can see manpages in full color, and have programs like mergemaster call up a special viewing window rather than just scrolling stuff across the screen? You are in for a treat if you use FreeBSD.
Getting this to work is not easy. And it's all related to the dotfile quagmire underlying every *nix terminal session. The nature of this quagmire is the quad-mode operation of *nix shells and all the dotfiles they potentially execute, depending on which mode the shell is run in. And a bunch of other stuff.
Shells generally have two status modes of operation under which they are run: as a "login" shell or not, and as an "interactive" shell or not. A login shell means the shell goes through the login process before it runs. An interactive shell means the shell offers a prompt and waits for user input. Since these modes of operation are either on or off for any given shell session, this gives rise to four potential states of operation of a shell, depending on what kind of task you perform.
- A login-interactive session, like logging-in and browsing through directories at a prompt (which is what most of us mostly do.)
- A session that is neither login nor interactive, like a scripted cron job that is run at 3 am on the first Thursday of every month.
- A login-only session, like an ssh remote execution command where you tell ssh to login to a machine, do something, and logout again. (I don't do this a lot, but have needed to on occasion. It feels hackish.)
- An interactive-only session, like running a subshell when you are already logged-in, like running one shell from another. (For example, I am at the bash prompt, but want to use sh, so I just type "sh" at the prompt.)
The complication arises out of which dotfiles are executed for each mode. Depending on how you are running things, the context of the shell's use, and what other shells are installed, different dotfiles will execute in non-obvious ways. There are even system-wide dotfiles in /etc that are searched, and some shells (like fish) store totally non-standard configurations.
For example, if my shell is /bin/sh and I log in to interact with it (scenario #1) on FreeBSD, a file called .profile in my home directory is executed, followed by .shrc, which has effectively been "sourced", or included, because .profile places it into the environment. If I have the C shell as my login shell, .cshrc and then .login are executed. If I use bash, .bash_profile is run. But then again, it entirely depends on what system you run and what shells (and dotfiles) you have installed.
In scenario 2, you should generally expect few dotfiles to be called, if any. Scenario 3 might give you half of what you expect, and scenario 4 might give you another or the same half.
To amplify the madness, shells that are related or in the same family (csh begat tcsh, sh begat bash) will often search for and use each other's dotfiles, if present. Sometimes the order of their searching isn't clear; it's just whichever file they happen to come across first.
Worse yet, you may have no choice if you want to run any important 3rd party frameworks, like RVM or Git, which depend on the presence (usually) of bash.
This ... chaos has the net effect of making a monumental task out of a simple thing like getting a pager such as most to behave the way you would expect.
So, here's what I did -- and it sort of works, for whatever it's worth.
First, I went into all the dotfiles and put the line echo "Running $HOME/<dotfile>" near the beginning of each file, to see what exactly was executing when I logged in or ran things in various modes.
Next, I placed the line PAGER=/usr/local/bin/most; export PAGER in .cshrc, .mailrc and .profile, just so there weren't any local references to the pagers less and more (both of which I hate because they are unnecessarily painful). Bash has a built-in less-like pager it uses, and you gotta watch out for that one. It can be a real disappointment if it shows up unexpectedly.
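One caveat worth noting: the Bourne-style assignment only works in sh/bash-family dotfiles; the csh family wants setenv. Roughly:
PAGER=/usr/local/bin/most; export PAGER    # sh family: .profile, .shrc, .bash_profile
setenv PAGER /usr/local/bin/most           # csh family: .cshrc, .login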
Since I am a conscripted Bourne-shell-type user, I sourced .profile from .bash_profile, and .login from .bash_login. (Even though this sounds logical, I don't think the second one makes much sense. But there you have it. It's there for the feel-good factor.)
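In practice, that just means a one-line .bash_profile along these lines (file contents assumed as an illustration, not copied from anywhere):
# ~/.bash_profile
. ~/.profile        # reuse the Bourne-shell settings for bash login shells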
So far, all this seems to work - but not for sudo tasks yet. It's taken hours, and there are still things that don't make sense. But hey, if it works, don't fix it.
Here's a resource that attempts to make sense of it all, and I still have problems with it.
https://github.com/sstephenson/rbenv/wiki/Unix-shell-initialization
Good luck!
Wednesday, March 13, 2013
A Notation for Functional Design with Binary Outcomes
Over the past couple of days, I've been working out the design for a Ruby program that builds regular, plain old jails.
After identifying and reading up on the problem domain, then writing a set of manual setup instructions for it (still underway [1]), I felt I knew enough to begin design. The program I'm writing is kind of a warm-up to eventually writing a script that installs service jails, which is an advanced setup task.
The program operates at the "glue" layer of coding (see the last blog post on object-oriented design), and I began writing the thing using the Bourne shell. After encountering a problem with pattern-matching command line options, I became very discouraged by the extremely limited capabilities of the Bourne shell. To do the most simple tasks we take for granted in Perl, we have to call two or three other programs in Bourne, spawn a sub-shell, and use the environment to do it.
The only reason to use Bourne shell scripts is to follow the old Unix grind that states: "It's good engineering practice to employ technologies that preexist in the OS, because someone might not have XYZ dependency installed on the system to make use of your program."
Although this may be true, after reading about the speed of Perl one-liners against their awk and sed counterparts for pattern-matching, I don't understand why Perl was taken out of FreeBSD in the first place. Absolutely obstinate adherence to tradition. Perl was part of the base OS until a political rift swept away some of the FreeBSD leadership after the 9/11/dot-com implosion. But what little is wrong with FreeBSD these days is another story.
In any case, I felt like losing my cookies when faced with the prospect of using such primitive, awkward technology just to live up to an old UNIX grind. So I decided to choose my kind of tool: Ruby!
I decided, as an experiment, to design my jail-building program from the outside-in, with an emphasis first on user interface. I wrote a few short shell scripts with the prefix "simple_jail" and tested them to see if they covered all my use cases from beginning to end:
simple_jail_config
simple_jail_init
simple_jail_start
simple_jail_stop
simple_jail_jump_in
simple_jail_ssh
These scripts have no options, logic, control flow or conditional instructions in them. They are simple linear sets of commands that just work for a single, fixed configuration.
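As an illustration of what "no logic, just commands" means, a hypothetical simple_jail_start might be little more than this (the jail path, hostname and address are made-up examples, and the old-style jail(8) invocation is assumed):
#!/bin/sh
# simple_jail_start: start one fixed jail, no options, no error handling
mount -t devfs devfs /usr/jails/testjail/dev
jail /usr/jails/testjail testjail 192.168.1.100 /bin/sh /etc/rc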
Next, I translated these items into command options:
simple_jail <configure|initialize|rootlogin|sshlogin|start|stop> <ip_address> .. <subargs>
simple_jail configure <ip_address> <hostname> <username> <password>
simple_jail initialize <ip_address>
simple_jail start <ip_address>
simple_jail stop <ip_address>
simple_jail rootlogin <ip_address>
simple_jail sshlogin <ip_address>
These are the options I could realistically support.
At this point I researched my choices for options-handling. I could either write this myself, use one of two Ruby libraries, or use one of several third-party gems. Each of these packages has somewhat limited features, and may or may not support my command line schema above. Although one of the Ruby built-ins looked like a good candidate, I needed to verify exactly what I would be doing for validation checks once I actually obtained the user's options. So I began to write a long set of logical rules like the ones below:
# check arguments
#
# if there is one and only one valid option provided
# and one and only one ip_address provided
# pass
#
# if option is configure
# validate configure subargs
# one and only one hostname
# one and only one username
# one and only one password
# pass
At that point, I began to ask myself: what would the supporting function names be for doing systems checks on these options rules? And what other checks would I want to do globally? For example:
# Process_and_environment_checks:
#
# script_running_in_jail? true : false
# script_running_as_only_copy? true : false
# script_running_as_root? true : false
# jail_already_running? true : false
I kind of put together a list of function calls that represented checks to options and the system globally, and found myself using a concocted notation to describe sequential program behavior that was helpful in thinking about algorithmic steps.
It's based on the ternary true/false test found in many languages, which takes the familiar form:
<expression> ? <expr if true> : <expr if false> # documentation
This idiom takes an expression (left), evaluates and tests it for true or false, then evaluates one of the two following expressions for a return value. The true return expression comes after the "?". The false return expression is listed after the colon ":". It's kind of a shorthand boolean method.
In my twisted version of this idiom, I use it to represent a line of code in a sequence of function calls in a very high level, hypothetical language:
<function_name> ? <happy-path return value> : <sad-path return value> # error message describing sad path
Some function calls using this notation return only true or false. These are status checks. For example:
jail_dir_exists ? true : false
The method jail_dir_exists does the checking. The return values are simple booleans.
In other cases where I used this notation, a function call is made; if it succeeds, it takes the happy path, and if it fails, it takes the sad path. For example, a function that adds a new jail entry to the configuration file:
configure_add_new_jail continue : exit # could not create new entry
In the case above, the function configure_add_new_jail is called. If it succeeds, the program proceeds silently to the next step in the program sequence. If it fails, an action is taken: exit the program. The part to the right of the hash is the error message sent to the console on failure.
In a slightly more complex form of this idiom, a function calls another conditionally. For example, in a utility function:
user_jail_dir_destroy? jail_dir_destroy! : exit # true = user answers 'y'
Above, the user is prompted by the user_jail_dir_destroy function to destroy a partly-created or preexisting jail on the system. The precondition is that he has, at some point in the past, run the script with the "initialize" option, which essentially runs "make installworld" and "make distribution" for a new jail from the host's object tree. If he previously built this jail (partly or fully), he might be unaware of it when running the script this time around, and would need to be prompted for which action to take to prevent his previous work from being overwritten.
So the gist of the pseudo-code line above is to provide the logical plan for doing so. The documentation to the right of the hash describes the mapping between the user's response to the [y/n] prompt with the true or false return parameter. In this case, the one needing description is the true parameter, which contains a call to another function to actually destroy the jail directory:
jail.dir.destroy! continue : exit # true = chflags and rm -rf on jail succeeded
Above, if the destroy function fails to delete the directory, the program exits. But the interesting case is what happens if it succeeds, so that is documented with a message - which could be used in debug mode. In this case, if the operation succeeds, the function returns and the happy path is resumed.
So basically, for each option taken on the command line, I have been defining steps to be taken in this revised true/false idiom format. It's not only compact, but can be edited in a plain text editor as something of a low-level program specification.
The interesting thing is, if you get enough of these statements going, class names begin to emerge, suggesting an underlying object model:
host.configure.etc.dir.exists? true : false
jail.dir_empty? true : false
configure.file.entry_valid? true : false
Above, I can replace underscores with dots and get some idea of how refined I want to make the potential class structure. Note I don't have to know what the classes are *before* writing pseudocode. The pseudocode steps accumulate, and class names emerge as I go.
[1] The notes are my own, pilfered from a variety of sources, and should be taken with caution. They are still incomplete:
https://docs.google.com/document/d/1y2c1O0mAagWD0Eypw0EuB_BbN5vm0MMjrrwLkvfnVcI/edit?usp=sharing
The Importance of Object-Oriented Design
Design Techniques
One of the more stifling features of object-oriented programming is trying to figure out what classes to design.
Unlike its venerated, comparatively simple tribal cousin, procedural programming, object-oriented programming requires you to invent abstractions in the form of classes that describe the problem domain, or selected parts of it - and package functions with data. The classes you design are blueprints from which objects are manufactured at runtime.
Depending on the language, there can also be inheritance, mixins, access permissions, generics, collections, interfaces, and even Eigenclasses that allow you to mutate the original classes and objects at runtime. And for the truly discriminating, pedantic programmer, there are design patterns.
The big advantage of object-oriented design is the ability to speak to the computer in terms of the problem's domain. The procedural approach makes you speak to the computer on its own terms, requiring you to translate the problem domain into terms the computer understands.
But the object-oriented approach is actually just an extension of the procedural approach. It's just the procedural parts (both functions and variables) are heavily modularized and cast into types - protected data structures. We cannot escape the procedural nature of programming. Programs are implementations of algorithms, and algorithms are sequences of steps -- recipes for solving problems.
It follows that as a complex wrapper to procedural methodology, there will be more design decisions to make when using an object-oriented methodology. And design is all about how to get problem P to solution S with code. The design you devise can vary depending on what type of problem you're solving, and the tools you have available.
This is one of the reasons I am trying to leave Perl in favor of Ruby. I've done a fair amount of reading on Ruby, and don't care to delve more deeply into Perl. Unfortunately, most of my practice with Perl has been through jobs. I was always hired as a "perl programmer" no matter what I did until recently. So, although I've had plenty of exposure and education centering on object-oriented design, I haven't had much opportunity to use it - at work or on my own projects. There just hasn't been a need.
However, what little project exposure I've had has been revealing when it comes to the task of designing classes. There is more than one way to do it. One piece of advice I can give about programming is that you must know your problem domain very well to successfully address it in code. This is especially important with object-oriented class design, as it requires you to declare the problem domain up-front.
So, getting class design correct before coding starts is very important. With reflection and Eigenclasses, it may not be as critical, because you can mutate objects at runtime with code after the fact. Although this violates some of the principles of class-based object-oriented design (encapsulation), it holds some promise of allowing softer, more simplistic cookie-cutter class designs to be planned before coding gets underway, which can later be mutated and specialized as the project progresses. This saves time over devising rigid, strongly-typed blueprints which become, in a way, immutable before coding begins. Reflection may allow for extra refinement to take place after the initial general design is settled upon. So in this respect, there may be a tradeoff between encapsulation and flexibility in object design.
Using a softly-typed language like Ruby or Python over a strongly-typed one like Java or C++ still doesn't substantially relieve the pain of a blank slate when it comes to devising a correct or suitable class model to address a domain-specific problem, but perhaps it can reduce the size of the "blank page" you are required to fill in before coding starts.
On top of that, what layer of coding you are doing has everything to do with what language you might be using. By layer, I'm suggesting how close to the hardware you are. Below is a rough diagram of languages vs. layers for most computing problems:
Layer (realm: languages most used)
Hardware
Software: No OS (Hardware Engineering: ASM, C, Verilog)
Software: BIOS (ASM, C)
Software: Bootstrap Loaders (OS Engineering: C, Forth, ASM)
Software: OS Kernel (C)
Software: Filesystems and Drivers (C)
Software: Privileged User Mode (Critical Operations: C, C++)
Software: OS utilities (Standard Operations: C, C++)
Software: 3rd party system (Application: Java, C#, C++, C)
Software: Services frameworks (Services: Perl, Ruby, Python, Java, C#, C++)
Software: Glue, Build, Admin code (Production: Perl, Ruby, Python, Shells)
Software: Domain specific langs (Integration: JavaScript, ERB, Jquery, SQL)
Software: Entertainment, utilities (Sourceless: C++, C#, Java, JavaScript)
Above, I have tried to roughly outline the layers at which coding occurs, and the associated hierarchy of usage realms. Each realm can be composed of one or more layers, and there are no firm boundaries.
1. Hardware Engineering - making hardware do something. (embedded)
2. OS Engineering - making hardware do something sophisticated. (kernel)
3. Critical Ops - making sure the OS is running. (critical utils)
4. Standard Ops - making the OS fully usable. (complete OS binaries)
5. Application - making the OS extra usable. (3rd party software)
6. Services - making the OS serve an automated purpose. (servers)
7. Production - using the OS to produce software and services. (glue)
8. Integration - using client installs for consumption (web browser)
9. Sourceless - making self-contained applications (turbo-tax, games)
Whether or not you agree with the categories, there is a distinct migration of languages across the different types of code being written at each layer. The closer we are to hardware, the more low-level languages we see being used. We start with assembly language and Verilog. The further we go from the hardware, the more high-level languages we see, until we are no longer dealing with source code, or even commands, but end-user applications controlled by user interfaces.
Somewhere in between the OS engineering layer and the critical operations layer we begin to see object-oriented languages being used. Somewhere between the applications and services layers we see the emergence of "pointerless" languages like Java. Somewhere around the services and production layers we see "scripting" languages emerging as the language of choice.
So, it's not too hard to see that once we get into the standard operations layer, object-oriented code comes into use. And it doesn't go away after that. In fact, object-oriented languages become dominant as we move toward the end-user and his applications, which can be very sophisticated programs reaching a million-plus lines of code.
In conservatively half of all programming realms, the object-oriented approach is important.
Thursday, February 21, 2013
Ruby RVM redux (RVM on Windows too!)
Installing Ruby using RVM
Using Wayne Seguin’s RVM (Ruby Version Manager) is definitely the best way to go for installing Ruby on most UNIX platforms. It allows any kind of Ruby to be installed, even multiple versions, and will keep everything in the home directory under .rvm, so no admin-level system installations are required. It is very well maintained, and considered the de facto standard for maintaining and updating Ruby installations. Visit http://rvm.io for more information. After installation, use rvm notes for up-to-date information about the installed framework.
UNIX Installation
These instructions should help you get RVM and Ruby MRI installed. During this process, you must be connected to the network the whole time. RVM will download things during the build.
First, install the bash shell. Bash is the required login shell for the account in which RVM is installed. RVM installs itself as a Bash function. For X-Windows: Make sure your terminal is set to call a login shell with
bash --login
Next, install the package: curl
Do this as root using your OS package manager. The specifics will depend on your system.
NOTE: Do not run the rest of these commands as root. RVM requires they be run with the same privileges as the account you’re logged into.
$ \curl -L https://get.rvm.io | bash -s stable
(Include the backslash at the beginning of the command line; it bypasses any shell alias for curl.)
This installs the RVM framework.
$ rvm list known
The above command lists all the possible rubies that can be installed. We will use MRI version 1.9.3 in this example, but the latest one should be used. That’s the one under the MRI section listed just before -head. The current patch level is p385.
$ rvm requirements
The above command lists all dependencies and tasks needed to build and install Ruby. Execute all instructions given. These are required software installations. Failure is likely to occur with the build or runtime if the steps are not taken. RVM will tell you what you need to do to prepare your system, and hand you the command lines to do it.
$ rvm install 1.9.3
The above command compiles Ruby 1.9.3
$ rvm use 1.9.3 --default
The above command sets the built ruby as current and default version.
$ rvm info
The command above verifies the Ruby installation.
$ ruby --version
Now all that's left is to verify IRB behaves properly:
$ irb
Windows installation (cygwin)
Go to http://www.cygwin.com and download setup.exe from the link on that page. Save the program to your desktop. It is your only package manager. The direct URL is http://cygwin.com/setup.exe
Double-click setup.exe and follow the wizard steps to the package manager screen. That’s the one with all the software categories in a tree. Use the search box to locate and install the following packages and all their dependencies:
curl (net/curl)
libcurl-devel (net/libcurl-devel)
cert (net/ca-certificates)
Open the cygwin bash shell from the desktop icon or start menu. Install RVM from inside the shell:
$ \curl -L https://get.rvm.io | bash -s stable
Exit the shell and re-open it.
Run:
$ rvm requirements
The above command lists all dependencies and tasks needed to build and run Ruby. This is a must-do step. But since the cygwin third-party packages change so much in their availability, you may have to fudge a bit in installing some things listed.
For example, rvm requirements lists the build-essential package to be installed. That is a meta-port of a bunch of build tools. At the time of writing and testing this, the build-essential package was available in the cygwin package manager. An hour later it disappeared without a trace.
Since this ingredient is so important, I’ve listed below the tools that I think were in the meta-port. When possible, choose -devel versions of the software, and select the mingw ports if available.
Here is a mostly-correct list of packages to select and install:
mingw-gcc-g++
make
mingw-zlib1
libyaml-devel
libsqlite-3-devel
sqlite3
libxml2-devel
libxslt-devel
autoconf (highest version)
libgdbm-devel
ncurses-devel
automake (latest)
libtool
bison
pkg-config
readline: GNU readline
Now compile Ruby 1.9.3:
$ rvm install 1.9.3
And now set the built ruby as current and default version:
$ rvm use 1.9.3 --default
Verify the Ruby installation:
$ rvm info
$ ruby --version
Lastly, verify irb does not crash:
$ irb